Tag: Rankings

  • The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead


    In December, The Wall Street Journal reported:

    [President-elect Donald Trump’s nominee to lead the National Institutes of Health] Dr. Jay Bhattacharya […] is considering a plan to link a university’s likelihood of receiving research grants to some ranking or measure of academic freedom on campus, people familiar with his thinking said. […] He isn’t yet sure how to measure academic freedom, but he has looked at how a nonprofit called Foundation for Individual Rights in Education scores universities in its freedom-of-speech rankings, a person familiar with his thinking said.

    We believe in and stand by the importance of the College Free Speech Rankings. More attention to the deleterious effect that restrictions on free speech and academic freedom have on research at our universities is desperately needed, so hearing that the rankings are being considered as a guidepost for NIH grantmaking is heartening. Dr. Bhattacharya’s own right to academic freedom was challenged by his Stanford University colleagues, so his concern about academic freedom’s effect on NIH grants is understandable.

    However, our College Free Speech Rankings are not the right tool for this particular job. They were designed with a specific purpose in mind — to help students and parents find campuses where students are both free and comfortable expressing themselves. They were not intended to evaluate the climate for conducting academic research on individual campuses and are a bad fit for that purpose. 

    While the rankings assess speech codes that apply to students, they do not currently assess policies pertaining to the academic freedom rights and research conduct of professors, who are the primary recipients of NIH grants. Nor do the rankings assess faculty sentiment about their campus climates. It would be a mistake to use the rankings beyond their intended purpose — and, if the rankings were used to deny funding for important research that would in fact be properly conducted, that mistake would be extremely costly.

    FIRE instead proposes three more appropriate ways for NIH to use its considerable power to improve academic freedom on campus and to ensure research is conducted in an environment conducive to accurate results.

    1. Use grant agreements to safeguard academic freedom as a strong contractual right. 
    2. Encourage open data practices to promote research integrity.
    3. Incentivize universities to study their campus climates for academic freedom.

    Why should the National Institutes of Health care about academic freedom at all?

    The pursuit of truth demands that researchers be able to follow the science wherever it leads, without fear, favor, or external interference. NIH therefore has a strong interest in ensuring that academic freedom rights are inviolable.

    As a steward of considerable taxpayer money, NIH has an obligation to ensure it spends its funds on high-quality research free from censorship or other interference from politicians or college and university administrators.

    Why the National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to decide where to send funds

    FIRE’s College Free Speech Rankings (CFSR) were never intended for use in determining research spending. As such, they have a number of design features that make them ill-suited to that purpose, whether taken as a whole or through their constituent parts.

    Firstly, like the U.S. News & World Report college rankings, a key reason for the creation of the CFSRs was to provide information to prospective undergraduate students and their parents. As such, they heavily emphasize students’ perceptions of the campus climate over the perceptions of faculty or researchers. In line with that student focus, our attitude and climate components are based on a survey of undergraduates. Additionally, the speech policies that we evaluate and incorporate into the rankings are those that affect students. We do not evaluate policies that affect faculty and researchers, which are often different and would be of greater relevance to deciding research funding. While some correlation between the two is plausible, we have no way of knowing whether, or to what degree, it exists.

    Secondly, for the component that most directly implicates the academic freedom of faculty, we penalize schools for attempts to sanction scholars for their protected speech, as tracked in our Scholars Under Fire database. While our Scholars Under Fire database provides excellent datapoints for understanding the climate at a university, it does not function as a systematic proxy for assessing academic freedom on a given campus as a whole. As one example, a university with relatively strong protection for academic freedom may have vocal professors with unpopular viewpoints that draw condemnation and calls for sanction that could hurt its ranking, while a climate where professors feel too afraid to voice controversial opinions could draw relatively few calls for sanction and thus enjoy a higher ranking. This shortcoming is mitigated when considered alongside the rest of our rankings components, but as discussed above, those other components mostly concern students rather than faculty.

    Thirdly, using CFSR to determine NIH funding could — counterintuitively — be abused by vigilante censors. Because we penalize schools for attempted and successful shoutdowns, the possibility of a loss of NIH funding could incentivize activists who want leverage over a university to disrupt as many events as possible in order to negatively influence its ranking, and thus its funding prospects. Even the threat of disruption could thus give censors undue power over a university administration that fears loss of funding.

    Finally, due to resource limitations, we do not rank all research universities. It would not be fair to deny funding to an unranked university or to fund an unranked university with a poor speech climate over a low-ranked university.

    Legal boundaries for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    While NIH has considerable latitude to determine how it spends taxpayer money, it is an arm of the government, and the First Amendment therefore places restrictions on how it may use that power. Notably, any solution must not penalize institutions for protected speech or scholarship by students or faculty unrelated to NIH-granted projects. NIH could not, for example, require that a university quash protected protests as a criterion for eligibility, or deny a university eligibility because of controversial research undertaken by a scholar who does not work on NIH-funded research.

    While NIH can (and effectively must) consider the content of applications in determining what to fund, eligibility must be open to all regardless of viewpoint. Even were this not the case as a constitutional matter (and it is, very much so), it would be important as a prudential one. People would understandably be skeptical of, if not outright disbelieve, scientific results obtained through a grant process with an obvious ideological filter. Indeed, that is the root of much of the current skepticism over federally funded science, and the exact situation academic freedom is intended to avoid.

    Additionally, NIH cannot impose a political litmus test on an individual or an institution, or compel an institution or individual to take a position on political or scientific issues as a condition of grant funding.

    In other words, any solution to improve academic freedom:

    • Must be viewpoint neutral;
    • Must not impose an ideological or political litmus test; and
    • Must not penalize an institution for protected speech or scholarship by its scholars or students.

    Guidelines for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    NIH should carefully tailor any solution to directly enhance academic freedom and to further NIH’s goal “to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science.” Going beyond that purpose to touch on issues and policies that don’t directly affect the conduct of NIH grant-funded research may leave such a policy vulnerable to legal challenge.

    Any solution should, similarly, avoid using vague or politicized terms such as “wokeness” or “diversity, equity, and inclusion.” Doing so creates needless skepticism of the process and — as FIRE knows all too well — introduces uncertainty as professors and institutions parse what is and isn’t allowed.

    Enforcement mechanisms should be a function of contractual promises of academic freedom, rather than left to apathetic accreditors or the unbounded whims of bureaucrats on campus or officials in government, for several reasons. 

    Regarding accreditors, FIRE over the years has reported many violations of academic freedom to accreditors who require institutions to uphold academic freedom as a precondition for their accreditation. Up to now, the accreditors FIRE has contacted have shown themselves wholly uninterested in enforcing their academic freedom requirements.

    When it comes to administrators, FIRE has documented countless examples of campus administrators violating academic freedom, either due to politics, or because they put the rights of the professor second to the perceived interests of their institution.

    As for government actors, we have seen priorities and politics shift dramatically from one administration to the next. It would be best for everyone involved if NIH funding did not ping-pong between ideological poles as a function of each presidential election, as the Title IX regulations now do. Dramatic changes to how NIH conceives of academic freedom with every new political administration would only create uncertainty that is sure to further chill speech and research.

    While the courts have been decidedly imperfect protectors of academic freedom, they have a better record than accreditors, administrators, or partisan government officials in parsing protected conduct from unprotected conduct. And that will likely be even more true with a strong, unambiguous contractual promise of academic freedom. Speaking of which…

    The National Institutes of Health should condition grants of research funds on recipient institutions adopting a strong contractual promise of academic freedom for their faculty and researchers

    The most impactful change NIH could enact would be to require as a condition of eligibility that institutions adopt strong academic freedom commitments, such as the 1940 Statement of Principles on Academic Freedom and Tenure or similar, and make those commitments explicitly enforceable as a contractual right for their faculty members and researchers.

    The status quo for academic freedom is one where nearly every institution of higher education makes promises of academic freedom and freedom of expression to its students and faculty. Yet only at public universities, where the First Amendment applies, are these promises construed with any consistency as an enforceable legal right. 

    Private universities, when sued for violating their promises of free speech and academic freedom, frequently argue that those promises are purely aspirational and that they are not bound by them (often at the same time that they argue faculty and students are bound by the policies). 

    Too often, courts accept this and universities prevail despite the obvious hypocrisy. NIH could stop private universities’ attempts to have their cake and eat it too by requiring them to legally stand by the promises of academic freedom that they so readily abandon when it suits them.

    NIH could additionally require that this contractual promise come with standard due process protections for those filing grievances at their institution, including:

    • The right to bring an academic freedom grievance before an objective panel;
    • The right to present evidence;
    • The right to speedy resolution;
    • The right to written explanation of findings including facts and reasons; and
    • The right to appeal.

    If the professor exhausts these options, they may sue for breach of contract. To reduce the burden of litigation, NIH could require that, if a faculty member prevails in a lawsuit over a violation of academic freedom, the violating institution be ineligible for future NIH funding until it pays the legal fees of the aggrieved faculty member.

    NIH could also study violations of academic freedom by creating a system for those connected to NIH-funded research to report violations of academic freedom or scientific integrity.

    It would further be proper for NIH to require institutions to eliminate any political litmus tests, such as mandatory DEI statements, as a condition of grant eligibility.

    The National Institutes of Health can implement strong measures to protect transparency and integrity in science

    NIH could encourage open science and transparency principles by heavily favoring studies that are pre-registered. Additionally, to obviate concerns that scientific results may be suppressed or buried because they are unpopular or politically inconvenient, NIH could require grant-funded projects to make their data available (with proper privacy safeguards) once the project is complete.

    To help deal with the perverse incentives that have created the replication crisis and undermined public trust in science, NIH could create impactful incentives for work on replications and the publication of null results.

    Finally, NIH could help prevent the abuse of Institutional Review Boards. When IRB review is appropriate for an NIH-funded project, NIH could require that review be limited to the standards laid out in the gold-standard Belmont Report. Additionally, it could create a reporting system for abuses in which IRB processes are used to suppress ethical research, delay it beyond reasonable timeframes, or violate academic freedom.

    The National Institutes of Health can incentivize study into campus climates for academic freedom

    As noted before, FIRE’s College Free Speech Rankings focus on students. Due to the logistical and resource difficulties of surveying faculty, our 2024 Faculty Report looking into many of the same issues took much longer and had to be limited in scope to 55 campuses, compared to the 250+ in the CFSR. All of which is to say that there is a strong need for research to understand faculty views and experiences on academic freedom. After all, we cannot solve a problem until we understand it. To that end, NIH should incentivize further study into faculty’s academic freedom.

    It is important to note that these studies should be informational and not used in a punitive manner, or to decide on NIH funding eligibility. This is because tying something as important as NIH funding to the results of the survey would create so significant an incentive to influence the results that the data would be impossible to trust. Even putting aside malicious interference by administrators and other faculty members, few faculty would be likely to give honest answers that imperiled institutional funding, knowing the resulting loss in funding might threaten their own jobs.

    Efforts to do these kinds of surveys in Wisconsin and Florida proved politically controversial and, at least initially, led to boycotts, which threatened to compromise the quality and reliability of the data. As such, it’s critical that any such survey be carried out in a way that maximizes trust, under the following principles:

    • Ideally, the administration of these surveys should be handled by an unbiased third party — not the schools themselves, or NIH. This third party should include respected researchers from across the political spectrum and have no partisan slant.
    • The survey sample must be randomized and not opt-in.
    • The questionnaire must be made public beforehand, and every effort should be made for the questions to be worded without any overt partisanship or ideology that would reduce trust.

    Conclusion: With great power…

    FIRE has for the last two decades been America’s premier defender of free speech and academic freedom on campus. Following Frederick Douglass’s wise dictum, “I would unite with anybody to do right and with nobody to do wrong,” we’ve worked with Democrats, Republicans, and everyone in between (and beyond) to advance free speech and open inquiry, and we’ve criticized them in turn whenever they’ve threatened these values.

    With that sense of both opportunity and caution, we would be heartened if NIH used its considerable power wisely in an effort to improve scientific integrity and academic freedom. But if wielded recklessly, that same considerable power threatens to do immense damage to science in the process. 

    We stand ready to advise if called upon, but integrity demands that we correct the record if we believe our data is being used for a purpose to which it isn’t suited.

    Source link

  • Data, Decisions, and Disruptions: Inside the World of University Rankings


    University rankings are pretty much everywhere. The earliest university rankings in the U.S. date back to the early 1900s, and the modern ones to the 1983 debut of the U.S. News & World Report rankings. But the kind of rankings we tend to talk about now, international or global rankings, really only date back to 2003 with the creation of the Shanghai Academic Ranking of World Universities.

    Over the decade that followed that first publication, a triumvirate emerged at the top of the rankings pyramid: the Shanghai Rankings, run by a group of academics at Shanghai Jiao Tong University; the Quacquarelli Symonds, or QS, Rankings; and the Times Higher Education World University Rankings. Between them, these three rankings producers, particularly QS and Times Higher, created a bewildering array of new rankings, dividing the world up by geography and field of study, mainly based on metrics relating to research.

    Joining me today is the former Chief Data Officer of the Times Higher Education Rankings, Duncan Ross. He took over those rankings at a time when it seemed like the higher education world might be running out of things to rank. Under his leadership, though, the Times Impact Rankings, which are based around the 17 UN Sustainable Development Goals, were developed. And that’s created a genuinely new hierarchy in world higher education, at least among those institutions that choose to submit to the rankings.

    My discussion with Duncan today covers a wide range of topics related to his time at THE. But the most enjoyable bit by far, for me anyway, was the part about the genesis of the Impact Rankings. Listen especially for the moment when Duncan talks about how the Impact Rankings came about because THE realized that its industry rankings weren’t very reliable. Fun fact: around that time I got into a very public debate with Phil Baty, the editor of the Times Higher, on exactly that subject. Which means maybe, just maybe, I’m kind of a godparent to the Impact Rankings. But that’s just me. You may well find other points of interest in this very compelling interview. Let’s hand things over to Duncan.


    The World of Higher Education Podcast
    Episode 3.20 | Data, Decisions, and Disruptions: Inside the World of University Rankings 

    Transcript

    Alex Usher: So, Duncan, let’s start at the beginning. I’m curious—what got you into university rankings in the first place? How did you end up at Times Higher Education in 2015?

    Duncan Ross: I think it was almost by chance. I had been working in the tech sector for a large data warehousing company, which meant I was working across many industries—almost every industry except higher education. I was looking for a new challenge, something completely different. Then a friend approached me and mentioned a role that might interest me. So I started talking to Times Higher Education, and it turned out it really was a great fit.

    Alex Usher: So when you arrived at Times Higher in 2015, the company already had a pretty full set of rankings products, right? They had the global rankings, the regional rankings, which I think started around 2010, and then the subject or field of study rankings came a couple of years later. When you looked at all of that, what did you think? What did you feel needed to be improved?

    Duncan Ross: Well, the first thing I had to do was actually bring all of that production in-house. At the time, even though Times Higher had rankings, they were produced by Clarivate—well, Thomson Reuters, as it was then. They were doing a perfectly good job, but if you’re not in control of the data yourself, there’s a limit to what you can do with it.

    Another key issue was that, while it looked like Times Higher had many rankings, in reality, they had just one: the World University Rankings. The other rankings were simply different cuts of that same data. And even within the World University Rankings, only 400 universities were included, with a strong bias toward Europe and North America. About 26 or 27 percent of those institutions were from the U.S., which didn’t truly reflect the global landscape of higher education.

    So the challenge was: how could we broaden our scope and truly capture the world of higher education beyond the usual suspects? And beyond that, were there other aspects of universities that we could measure, rather than just relying on research-centered metrics? There are good reasons why international rankings tend to focus on research—it’s the most consistent data available—but as you know, it’s certainly not the only way to define excellence in higher education.

    Alex Usher: Oh, yeah. So how did you address the issue of geographic diversity? Was it as simple as saying, “We’re not going to limit it to 400 universities—we’re going to expand it”? I think the ranking now includes over a thousand institutions, right? I’ve forgotten the exact number.

    Duncan Ross: It’s actually around 2,100 or so, and in practice, the number is even larger because, about two years ago, we introduced the concept of reporter institutions. These are institutions that haven’t yet met the criteria to be fully ranked but are already providing data.

    The World University Rankings have an artificial limit because there’s a threshold for participation based on the number of research articles published. That threshold is set at 1,000 papers over a five-year period. If we look at how many universities could potentially meet that criterion, it’s probably around 3,000, and that number keeps growing. But even that is just a fraction of the higher education institutions worldwide. There are likely 30,000—maybe even 40,000—higher education institutions globally, and that’s before we even consider community colleges.

    So, expanding the rankings was about removing artificial boundaries. We needed to reach out to institutions in parts of the world that weren’t well represented and think about higher education in a way that wasn’t so Anglo-centric.

    One of the biggest challenges I’ve encountered—and it’s something people inevitably fall into—is that we tend to view higher education through the lens of our own experiences. But higher education doesn’t function the same way everywhere. It’s easy to assume that all universities should look like those in Canada, the U.S., or the UK—but that’s simply not the case.

    To improve the rankings, we had to be open-minded, engage with institutions globally, and carefully navigate the challenges of collecting data on such a large scale. As a result, Times Higher Education now has data on around 5,000 to 6,000 universities—a huge step up from the original 400. Still, it’s just a fraction of the institutions that exist worldwide.

    Alex Usher: Well, that’s exactly the mission of this podcast—to get people to think beyond an Anglo-centric view of the world. So I take your point that, in your first couple of years at Times Higher Education, most of what you were doing was working with a single set of data and slicing it in different ways.

    But even with that, collecting data for rankings isn’t simple, right? It’s tricky, you have to make a lot of decisions, especially about inclusion—what to include and how to weight different factors. And I think you’ve had to deal with a couple of major issues over the years—one in your first few years and another more recently.

    One was about fractional counting of articles, which I remember went on for quite a while. There was that big surge of CERN-related articles, mostly coming out of Switzerland but with thousands of authors from around the world, which affected the weighting. That led to a move toward fractional weighting, which in theory equalized things a bit—but not everyone agreed.

    More recently, you’ve had an issue with voting, right? What I think was called a cartel of voters in the Middle East, related to the reputation rankings. Can you talk a bit about how you handle these kinds of challenges?

    Duncan Ross: Well, I think the starting point is that we’re always trying to evaluate things in a fair and consistent way. But inevitably, we’re dealing with a very noisy and messy world.

    The two cases you mentioned are actually quite different. One is about adjusting to the norms of the higher education sector, particularly in publishing. A lot of academics, especially those working within a single discipline, assume that publishing works the same way across all fields—that you can create a universal set of rules that apply to everyone. But that’s simply not the case.

    For example, the concept of a first author doesn’t exist in every discipline. Likewise, in some fields, the principal investigator (PI) is always listed at the end of the author list, while in others, that’s not the norm.

    One of the biggest challenges we faced was in fields dealing with big science—large-scale research projects involving hundreds or even thousands of contributors. In high-energy physics, for example, a decision was made back in the 1920s: everyone who participates in an experiment above a certain threshold is listed as an author in alphabetical order. They even have a committee to determine who meets that threshold—because, of course, it’s academia, so there has to be a committee.

    But when you have 5,000 authors on a single paper, that distorts the rankings. So we had to develop a mechanism to handle that. Ideally, we’d have a single metric that works in all cases—just like in physics, where we don’t use one model of gravity in some situations and a different one in others. But sometimes, you have to make exceptions. Now, Times Higher Education is moving toward more sophisticated bibliometric measures to address these challenges in a better way.
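    As an aside for readers less familiar with the bibliometric jargon, here is a minimal sketch of the difference between whole counting, where every institution on a paper receives full credit, and fractional counting, where each paper’s credit is split across its author list. The papers and institution names below are invented for illustration; THE’s production method, and the more sophisticated measures Duncan mentions, go well beyond this toy version.

    ```python
    from collections import defaultdict

    # Hypothetical papers: each entry lists the institutions credited on one paper.
    papers = [
        ["U of A"],                                 # an ordinary single-institution paper
        [f"Institution {i}" for i in range(5000)],  # a hyper-authored big-science paper
    ]

    def whole_counts(papers):
        """Whole counting: every institution on a paper gets a full credit of 1."""
        counts = defaultdict(float)
        for institutions in papers:
            for inst in set(institutions):
                counts[inst] += 1.0
        return counts

    def fractional_counts(papers):
        """Fractional counting: each paper contributes 1 in total, split evenly
        across the distinct institutions on its author list."""
        counts = defaultdict(float)
        for institutions in papers:
            distinct = set(institutions)
            share = 1.0 / len(distinct)
            for inst in distinct:
                counts[inst] += share
        return counts

    whole = whole_counts(papers)
    frac = fractional_counts(papers)
    print(whole["U of A"], whole["Institution 0"])  # 1.0 1.0: both papers weigh the same
    print(frac["U of A"], frac["Institution 0"])    # 1.0 0.0002: big-science credit is diluted
    ```

    Under whole counting, a single 5,000-author paper can move thousands of institutional scores at once; under fractional counting, any one paper’s total influence stays fixed at a single unit.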

    The second issue you mentioned—the voting behavior in reputation rankings—is completely different because it involves inappropriate behavior. And this kind of issue isn’t just institutional; sometimes, it’s at the individual academic level.

    We’re seeing this in publishing as well, where some academics are somehow producing over 200 articles a year. Impressive productivity, sure—but is it actually viable? In cases like this, the approach has to be different. It’s about identifying and penalizing misbehavior.

    At the same time, we don’t want to be judge and jury. It’s difficult because, often, we can see statistical patterns that strongly suggest something is happening, but we don’t always have a smoking gun. So our goal is always to be as fair and equitable as possible while putting safeguards in place to maintain the integrity of the rankings.

    Alex Usher: Duncan, you hinted at this earlier, but I want to turn now to the Impact Rankings. This was the big initiative you introduced at Times Higher Education. Tell us about the genesis of those rankings—where did the idea come from? Why focus on impact? And why the SDGs?

    Duncan Ross: It actually didn’t start out as a sustainability-focused project. The idea came from my colleague, Phil Baty, who had always been concerned that the World University Rankings didn’t include enough measurement around technology transfer.

    So, we set out to collect data from universities on that—looking at things like income from consultancy and university spin-offs. But when the data came back, it was a complete mess—totally inconsistent and fundamentally unusable. So, I had to go back to the drawing board.

    That’s when I came across SDG 9—Industry, Innovation, and Infrastructure. I looked at it and thought, This is interesting. It was compelling because it provided an external framework.

    One of the challenges with ranking models is that people always question them—Is this really a good model for excellence? But with an external framework like the SDGs, if someone challenges it, I can just point to the United Nations and say, Take it up with them.

    At that point, I had done some data science work and was familiar with the tank problem, so I jokingly assumed there were probably 13 to 18 SDGs out there. (That’s a data science joke—those don’t land well 99% of the time.) But as it turned out, there were more SDGs, and exploring them was a real light bulb moment.
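    For anyone who misses the reference: the “tank problem” is the classic German tank problem, estimating how many items exist in a numbered series from the serial numbers you happen to have seen. A minimal sketch of the standard estimator follows, with made-up sample values, purely to unpack the joke about guessing how many SDGs there were.

    ```python
    def estimate_total(observed_serials):
        """German tank problem: estimate the total count of sequentially numbered
        items from a sample of observed serial numbers (estimator: m + m/k - 1)."""
        m = max(observed_serials)   # largest serial number seen
        k = len(observed_serials)   # how many serials were seen
        return m + m / k - 1

    # Made-up example: having only "seen" SDGs 3, 9, and 13, the estimate is about 16.3.
    print(estimate_total([3, 9, 13]))   # 13 + 13/3 - 1 = 16.33...; there are in fact 17 SDGs
    ```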

    The SDGs provided a powerful framework for understanding the most positive role universities can play in the world today. We all know—well, at least those of us outside the U.S. know—that we’re facing a climate catastrophe. Higher education has a crucial role to play in addressing it.

    So, the question became: How can we support that? How can we measure it? How can we encourage better behavior in this incredibly important sector?

    Alex Usher: The Impact Rankings are very different in that roughly half of the indicators—about 240 to 250 across all 17 SDGs—aren’t naturally quantifiable. Instead, they’re based on stories.

    For example, an institution might submit, This is how we combat organized crime or This is how we ensure our food sourcing is organic. These responses are scored based on institutional submissions.

    Now, I don’t know exactly how Times Higher Education evaluates them, but there has to be a system in place. How do you ensure that these institutional answers—maybe 120 to 130 per institution at most—are scored fairly and consistently when you’re dealing with hundreds of institutions?

    Duncan Ross: Well, I can tell you that this year, over 2,500 institutions submitted approved data—so it’s grown significantly. One thing to clarify, though, is that these aren’t written-up reports like the UK’s Teaching Excellence Framework, where universities can submit an essay justifying why they didn’t score as well as expected—what I like to call the dog ate my student statistics paper excuse. Instead, we ask for evidence of the work institutions have done. That evidence can take different forms—sometimes policies, sometimes procedures, sometimes concrete examples of their initiatives.

    The scoring process itself is relatively straightforward. First, we give some credit if an institution says they’re doing something. Then, we assess the evidence they provide to determine whether it actually supports their claim. But the third and most important part is that institutions receive extra credit if the evidence is publicly available. If you publish your policies or reports, you open yourself up to scrutiny, which adds accountability.
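    To make that three-step rubric concrete, here is a minimal sketch of how such scoring could look in code. The point values and field names are hypothetical, and THE does not publish its exact weights in this interview, so treat this only as an illustration of the claim, evidence, and public-evidence ladder Duncan describes.

    ```python
    from dataclasses import dataclass

    @dataclass
    class MetricResponse:
        claims_activity: bool     # the institution says it does the thing
        evidence_supports: bool   # assessors judge the submitted evidence to back the claim
        evidence_public: bool     # the policy or report is published where anyone can check it

    # Hypothetical point values; THE's actual weights are not reproduced here.
    CLAIM_POINTS, EVIDENCE_POINTS, PUBLIC_POINTS = 1, 2, 1

    def score(response: MetricResponse) -> int:
        """Score one qualitative, evidence-based answer under the toy rubric."""
        points = 0
        if response.claims_activity:
            points += CLAIM_POINTS            # some credit for saying you do it
            if response.evidence_supports:
                points += EVIDENCE_POINTS     # more credit if the evidence holds up
                if response.evidence_public:
                    points += PUBLIC_POINTS   # extra credit for public, checkable evidence
        return points

    print(score(MetricResponse(True, True, True)))    # 4: claimed, evidenced, and public
    print(score(MetricResponse(True, False, False)))  # 1: claimed but unsupported
    ```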

    A great example is SDG 5—Gender Equality—specifically around gender pay equity. If an institution claims to have a policy on gender pay equity, we check: Do you publish it? If so, and you’re not actually living up to it, I’d hope—and expect—that women within the institution will challenge you on it. That’s part of the balancing mechanism in this process.

    Now, how do we evaluate all this? Until this year, we relied on a team of assessors. We brought in people, trained them, supported them with our regular staff, and implemented a layer of checks—such as cross-referencing responses against previous years. Ultimately, human assessors were making the decisions.

    This year, as you might expect, we’re introducing AI to assist with the process. AI helps us filter out straightforward cases, leaving the more complex ones for human assessors. It also ensures that we don’t run into assessor fatigue. When someone has reviewed 15 different answers to the same question from various universities, the process can get a bit tedious—AI helps mitigate that.

    Alex Usher: Yeah, it’s like that experiment with Israeli judges, right? You don’t want to be the last case before lunch—you get a much harsher sentence if the judge is making decisions on an empty stomach. I imagine you must have similar issues to deal with in rankings.

    I’ve been really impressed by how enthusiastically institutions have embraced the Impact Rankings. Canadian universities, in particular, have really taken to them. I think we had four of the top ten last year and three of the top ten this year, which is rare for us. But the uptake hasn’t been as strong—at least not yet—in China or the United States, which are arguably the two biggest national players in research-based university rankings. Maybe that’s changing this year, but why do you think the reception has been so different in different parts of the world? And what does that say about how different regions view the purpose of universities?

    Duncan Ross: I think there’s definitely a case that different countries and regions have different approaches to the SDGs. In China, as you might expect, interest in the rankings depends on how well they align with current Communist Party priorities. You could argue that something similar happens in the U.S. The incoming administration has made it fairly clear that SDG 10 (Reduced Inequalities) and SDG 5 (Gender Equality) are not going to be top priorities—probably not SDG 1 (No Poverty), either. So in some cases, a country’s level of engagement reflects its political landscape.

    But sometimes, it also reflects the economic structure of the higher education system itself. In the U.S., where universities rely heavily on high tuition fees, rankings are all about attracting students. And the dominant ranking in that market is U.S. News & World Report—the 600-pound gorilla. If I were in their position, I’d focus on that, too, because it’s the ranking that brings in applications.

    In other parts of the world, though, rankings serve a different purpose. This ties back to our earlier discussion about different priorities in different regions. Take Indonesia, for example. There are over 4,000 universities in the country. If you’re an institution like ITS (Institut Teknologi Sepuluh Nopember), how do you stand out? How do you show that you’re different from other universities?

    For them, the Impact Rankings provided an opportunity to showcase the important work they’re doing—work that might not have been recognized in traditional rankings. And that’s something I’m particularly proud of with the Impact Rankings. Unlike the World University Rankings or the Teaching Rankings, it’s not just the usual suspects at the top.

    One of my favorite examples is Western Sydney University. It’s a fantastic institution. If you’re ever in Sydney, take the train out there. Stay on the train—it’s a long way from the city center—but go visit them. Look at the incredible work they’re doing, not just in sustainability but also in their engagement with Aboriginal and Torres Strait Islander communities. They’re making a real impact, and I’m so pleased that we’ve been able to raise the profile of institutions like Western Sydney—universities that might not otherwise get the recognition they truly deserve.

    Alex Usher: But you’re still left with the problem that many institutions that do really well in research rankings have, in effect, boycotted the Impact Rankings—simply because they’re not guaranteed to come first.

    A lot of them seem to take the attitude of, Why would I participate in a ranking if I don’t know I’ll be at the top?

    I know you initially faced that issue with LERU (the League of European Research Universities), and I guess the U.S. is still a challenge, with lower participation numbers.

    Do you think Times Higher Education will eventually crack that? It’s a tough nut to crack. I mean, even the OECD ran into the same resistance—it was the same people saying, Rankings are terrible, and we don’t want better ones.

    What’s your take on that?

    Duncan Ross: Well, I’ve got a brief anecdote about this whole rankings boycott approach. There’s one university—I’m not going to name them—that made a very public statement about withdrawing from the Times Higher Education World University Rankings. And just to be clear, that’s something you can do, because participation is voluntary—not all rankings are. So, they made this big announcement about pulling out. Then, about a month later, we got an email from their graduate studies department asking, Can we get a copy of your rankings? We use them to evaluate applicants for interviews. So, there’s definitely some odd thinking at play here. But when it comes to the Impact Rankings, I’m pretty relaxed about it. Sure, it would be nice to have Oxford or Harvard participate—but MIT does, and they’re a reasonably good school, I hear. Spiderman applied there, so it’s got to be decent. The way I see it, the so-called top universities already have plenty of rankings they can focus on. If we say there are 300 top universities in the world, what about the other 36,000 institutions?

    Alex Usher: I just want to end on a slightly different note. While doing some background research for this interview, I came across your involvement in DataKind—a data charity that, if I understand correctly, you founded. I’ve never heard of a data charity before, and I find the idea fascinating—intriguing enough that I’m even thinking about starting one here. Tell us about DataKind—what does it do?

    Duncan Ross: Thank you! So, DataKind was actually founded in the U.S. by Jake Porway. I first came across it at one of the early big data conferences—O’Reilly’s Strata Conference in New York. Jake was talking about how data could be used for good, and at the time, I had been involved in leadership roles at several UK charities. It was a light bulb moment. I went up to Jake and said, Let me start a UK equivalent! At first, he was noncommittal—he said, Yeah, sure… someday. But I just kept nagging him until eventually, he gave in and said yes. Together with an amazing group of people in the UK—Fran Bennett, Caitlin Thaney, and Stuart Townsend—we set up DataKind UK.

    The concept is simple: we often talk about how businesses—whether in telecom, retail, or finance—use data to operate more effectively. The same is true in the nonprofit sector. The difference is that banks can afford to hire data scientists—charities often can’t. So, DataKind was created to connect data scientists with nonprofit organizations, allowing them to volunteer their skills.

    Of course, for this to work, a charity needs a few things:

    1. Leadership willing to embrace data-driven decision-making.
    2. A well-defined problem that can be analyzed.
    3. Access to data—because without data, we can’t do much.

    Over the years, DataKind—both in the U.S. and worldwide—has done incredible work. We’ve helped nonprofits understand what their data is telling them, improve their use of resources, and ultimately, do more for the communities they serve. I stepped down from DataKind UK in 2020 because I believe that the true test of something successful is whether it can continue to thrive without you. And I’m happy to say it’s still going strong. I kind of hope the Impact Rankings continue to thrive at Times Higher Education now that I’ve moved on as well.

    Alex Usher: Yeah. Well, thank you for joining us today, Duncan.

    Duncan Ross: It’s been a pleasure.

    And it just remains for me to thank our excellent producers, Sam Pufek and Tiffany MacLennan. And you, our viewers, listeners, and readers for joining us today. If you have any questions or comments about today’s episode, please don’t hesitate to get in touch with us at [email protected]. Worried about missing an episode of the World of Higher Education? There’s a solution for that. Go to our YouTube page and subscribe. Next week, our guest will be Jim Dickinson. He’s an associate editor at Wonkhe in the UK, and he’s also maybe the world expert on comparative student politics. And he joins us to talk about the events in Serbia where the student movement is challenging the populist government of the day. Bye for now.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.

    Source link

  • Collateral Damage of the Rankings Obsession


    UF Law slid from 21 to 28 in the US News Law School Rankings. Twenty-eight is not so bad and it, at least, avoids the dreaded 30. (I don’t mean to imply these rankings mean anything except to some university presidents and law school deans on the make.)

    So why the slip? It’s actually pretty simple. US News began factoring in bar passage rates, on which UF Law has historically done miserably given the caliber of students it admits. (Schools with nominally less capable students put UF to shame.) UF’s underachievement is likely due to a number of factors: a very high curve, students taking many hours of ungraded courses, often in tangential subjects, very few required bar courses, and so on.

    Since bar passage has been a problem for decades, why wasn’t it addressed before? That too has an easy answer. The rankings-obsessed administration of Laura Rosenbury and, I think, her chief benefactor, Provost Glover, did not deem it a pressing matter. Why? Because when only rankings count, and not whether graduating students can pass a bar exam, why worry about it?

    Don’t get me wrong. I do not know if there is a correlation between passing the bar and succeeding as an attorney. I do know passing the bar is definitely correlated with being permitted to practice law, and there can be no “success” if you can’t get through the door.

    So, UF is left with the collateral damage caused by a dean who put self-promotion ahead of duty to the students. In fact, I am told that policy actually exacerbated the bar passage issue. When confronted with “splitter” students, those with high LSAT scores but not comparable GPAs, the policy was to give the nod to the high-LSAT students. Yes, those would be the very bright ones who are likely overconfident and lack the work ethic to pass the bar. It seems like a dumb policy, but not when you realize that UF thought it did better in the rankings this way — that is, until bar passage counted.

    Of course, there is no accountability. Rosenbury is off to Barnard, where she continues the policy of limiting free speech that personally served her interests at Florida. If this does not ring a bell, check it out in the Times. Presumably, that policy, too, is in place because it pleases those higher up.

    Source link

  • Legal Scholarship, Citations, and the Rankings Obsession


     

    I have not thought much about legal scholarship lately, but a few months ago my elitist and ratings-obsessed former dean sent out a memo to the faculty promoting the idea of writing things that will be cited. The reason? Think about it. It is in the air that the US News rankings may soon use citations as one of the measures in determining rankings.

    This brought to mind an empirical study my coauthor, Amy Mashburn, and I did a couple of years ago. Citations were correlated at statistically significant levels with the ranking of the school from which you graduated, the ranking of the school at which you teach, and the ranking of the law review where your article was published. Why is this? Likely because law students making publication decisions know they do not know much about law and rely on institutional authority. In fact, it is a common practice, when a manuscript arrives, to check where the author has published before and how often they have been cited.

    This means that citations have almost nothing to do with the quality of the work. Yet, in the rankings-obsessed world of my former dean (who, I am told, also vetoes any entry-level candidate who does not come from an Ivy League school), quality is irrelevant.

    But maybe it does not matter that quality is all but irrelevant, because law professors rarely engage in scholarship. By that I mean actually trying to discover something that advances our understanding of anything. Instead they write op-ed pieces or legal briefs that are devoted to one side of the story. That is what they were trained to do in law school.

    But the citation game based on where you went to school or where you teach gets worse — much worse. When Mashburn and I did our study, we examined what a citation really meant. Did it mean that the cited work was thought-provoking, engaging, controversial, or whatever? No. Citations were almost always just for some fact the cited work mentioned, whether or not the cited work was itself merely citing another work that had cited yet another work, none of which had actually done any legitimate research. In other words, rarely did one law professor give a hoot about what another one said.

    What this means is that professors at schools outside the top 20 should probably be devoting more time to teaching and less to writing. It also means that, when and if US News starts counting citations, the rankings will not change. But don’t be surprised if raises and promotions for law professors become dependent on the number of citations.

    As an aside, Malcolm Gladwell now has two episodes of his podcast series devoted to the rankings. He notes that in the ’70s, when there was a battle between Time, Newsweek, and US News that US News was losing badly, the whole ranking enterprise that now rules higher education began as a marketing gimmick.

    Source link