Category: Featured

  • Why the NIH cuts are so wrong (opinion)

    Why the NIH cuts are so wrong (opinion)

    Indirect cost recovery (ICR) seems like a boring, technical budget subject. In reality, it is a major source of the long-running budget crises at public research universities. Misinformation about ICR has also confused everyone about the university’s public benefits.

    These paired problems—concealed budget shortfalls and misinformation—didn’t cause the ICR cuts being implemented by the NIH acting director, one Matthew J. Memoli, M.D. But they are the basis of Memoli’s rationale.

    Trump’s people will sustain these cuts unless academics can create an honest counternarrative that inspires wider opposition. I’ll sketch a counternarrative below.

    The sudden policy change is that the NIH is to cap indirect cost recovery at 15 percent of the direct costs of a grant, regardless of the existing negotiated rate. Multiple lawsuits have been filed challenging the legality of the change, and courts have temporarily blocked it from going into effect.

    Memoli’s notice of the cap, issued Friday, has a narrative that is wrong but internally coherent and plausible.

    It starts with three claims about the $9 billion of the overall $35 billion research funding budget that goes to indirect costs:

    • Indirect cost allocations are in zero-sum competition with direct costs, therefore reducing the total amount of research.
    • Indirect costs are “difficult for NIH to oversee” because they aren’t entirely entailed by a specific grant.
    • “Private foundations” cap overhead charges at 10 to 15 percent of direct costs and all but a handful of universities accept those grants.

    Memoli offers a solution: Define a “market rate” for indirect costs as that allowed by private foundations (Gates, Chan Zuckerberg, some others). The implication is the foundations’ rate captures real indirect costs rather than inflated or wishful costs that universities skim to pad out bloated administrations. On this analytical basis, currently wasted indirect costs will be reallocated to useful direct costs, thus increasing rather than decreasing scientific research.

    There’s a false logic here that needs to be confronted.

    The strategy so far to resist these cuts seems to focus on outcomes rather than on the actual claims or the underlying budgetary reality of STEM research in the United States. Scientific groups have called the ICR rate cap an attack on U.S. scientific leadership and on public benefits to U.S. taxpayers (childhood cancer treatments that will save lives, etc.). This is all important to talk about. And yet these claims don’t refute the NIH logic. Nor do they get at the hidden budget reality of academic science.

    On the logic: Indirect costs aren’t in competition with direct costs because direct and indirect costs pay for different categories of research ingredients.

    Direct costs apply to the individual grant: costs for chemicals, graduate student labor, equipment, etc., that are only consumed by that particular grant.

    Indirect costs, also called facilities and administrative (F&A) costs, support infrastructure used by everybody in a department, discipline, division, school or university. Infrastructure is the library that spends tens of thousands of dollars a year to subscribe to just one important journal that is consulted by hundreds or thousands of members of that campus community annually. Infrastructure is the accounting staff that writes budgets for dozens and dozens of grant applications across departments or schools. Infrastructure is the building, new or old, that houses multiple laboratories: If it’s new, the campus is still paying it off; if it’s old, the campus is spending lots of money keeping it running. These things are the tip of the iceberg of the indirect costs of contemporary STEM research.

    In response to the NIH’s social media announcement of its indirect costs rate cut, Bertha Madras had a good starter list of what indirects involve.

    Screenshot via Christopher Newfield

    And there are also people who track all these materials, reorder them, run the daily accounting, etc.—honestly, people who aren’t directly involved in STEM research have a very hard time grasping its size and complexity, and therefore its cost.

    As part of refuting the claim that NIH can just not pay for all this and therefore pay for more research, the black box of research needs to be opened up, Bertha Madras–style, and properly narrated as a collaborative (and exciting) activity.

    This matter of human activity gets us to the second NIH-Memoli claim, which involves toting up the processes, structures, systems and people that make up research infrastructure and adding up their costs. The alleged problem is that it is “difficult to oversee.”

    Very true, but difficult things can and often must be done, and that is what happens with indirect costs. Every university compiles indirect costs as a condition of receiving research grants. Specialized staff (more indirect costs!) use a large amount of accounting data to sum up these costs, and they use expensive information technology to do this to the correct standard. University staff then negotiate with federal agencies for a rate that addresses their particular university’s actual indirect costs. These rates are set for a time, then renegotiated at regular intervals to reflect changing costs or infrastructural needs.

    The fact that this process is “difficult” doesn’t mean that there’s anything wrong with it. This claim shouldn’t stand—unless and until NIH convincingly identifies specific flaws.

    As stated, the NIH-Memoli claim that cutting funding for overhead will increase science is easily refuted. (And we can say this while still advocating for reducing overhead costs, including the ever-rising compliance costs imposed by federal research agencies. But we would do that by reducing the mandated costs themselves, not by capping the recovery rate.)

    The third statement—that private foundations allow only 10 to 15 percent rates of indirect cost recovery—doesn’t mean anything in itself. Perhaps Gates et al. have the definitive analysis of true indirect costs that they have yet to share with humanity. Perhaps Gates et al. believe that the federal taxpayer should fund the university infrastructure that they are entitled to use at a massive discount. Perhaps Gates et al. use their wealth and prestige to leverage a better deal for themselves at the expense of the university just because they can. Which of these interpretations is correct? NIH-Memoli assume the first but don’t actually show that the private foundation rate is the true rate. (In reality, the second explanation is the best.)

    This kind of critique is worth doing, and it can be expanded. The NIH view reflects right-wing public-choice economics that treat teachers, scientists et al. as simple gain maximizers producing private, not public goods. This means that their negotiations with federal agencies will reflect their self-interest, while in contrast the “market rate” is objectively valid. We do need to address these false premises and bad conclusions again and again, whenever they arise.

    However, this critique is only half the story. The other half is the budget reality of large losses on sponsored research, all incurred as a public service to knowledge and society.

    Take that NIH image above. It makes no logical sense to put the endowments of three very untypical universities next to their ICR rates: They aren’t connected. It makes political narrative sense, however: The narrative is that fat-cat universities are making a profit on research at regular taxpayers’ expense, and getting even fatter.

    The only way to deal with this very effective, very entrenched Republican story is to come clean on the losses that universities incur. The reality is that existing rates of indirect cost recovery do not cover actual indirect costs, but require subsidy from the university that performs the research. ICR is not icing on the budget cake that universities can do without. ICR buys only a portion of the indirect costs cake, and the rest is purchased by each university’s own institutional funds.

    For example, here are the top 16 university recipients of federal research funds. One of the largest in terms of NIH funding (through the Department of Health and Human Services) is the University of California, San Francisco, winning $795.6 million in grants in fiscal year 2023. (The National Science Foundation’s Higher Education Research and Development (HERD) Survey tables for fiscal year 2023 are here.)

    table visualization

    UCSF’s negotiated indirect cost recovery rate is 64 percent. This means that it has shown HHS and other agencies detailed evidence that it has real indirect costs in something like this amount (more on “something like” in a minute). It means that HHS et al. have accepted UCSF’s evidence of their real indirect costs as valid.

    If all of UCSF’s $795.6 million in HHS funding is received at a 64 percent ICR rate, then every $1.64 of grant funds contains $0.64 in indirect funds and $1.00 in direct. The math works out to UCSF receiving about $310 million of its HHS funds in the form of ICR.

    Now, the new NIH directive cuts UCSF from 64 percent to 15 percent. That’s a reduction of about 77 percent. Reduce $310 million by that proportion and you have UCSF losing about $238 million in one fell swoop. There’s no mechanism in the directive for shifting that into the direct costs of UCSF grants, so let’s assume a full loss of $238 million.
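The arithmetic in the last few paragraphs can be sketched in a few lines of Python. This is a back-of-the-envelope estimate using only the figures quoted in the text, and it assumes, as the directive apparently does, that none of the cut is shifted into direct costs:

```python
# Back-of-the-envelope estimate of UCSF's loss under a 15% ICR cap.
# Figures come from the surrounding text; this is an illustration,
# not official grant accounting.

total_award = 795.6e6   # UCSF's FY2023 HHS funding
old_rate = 0.64         # negotiated indirect cost recovery rate
new_rate = 0.15         # the NIH directive's cap

# At a 64% rate, every $1.64 of award money carries $0.64 of indirect costs.
direct = total_award / (1 + old_rate)   # ~ $485M in direct costs
old_indirect = direct * old_rate        # ~ $310M recovered today
new_indirect = direct * new_rate        # ~ $73M under the cap
loss = old_indirect - new_indirect      # ~ $238M lost per year

# UCSF already spends roughly $505M of institutional funds on research
# (the HERD "Institution Funds" figure discussed later in the piece);
# the cap adds the full loss on top of that existing subsidy.
new_subsidy = 505e6 + loss              # ~ $743M

print(f"indirect recovered now:  ${old_indirect / 1e6:.0f}M")
print(f"loss under the 15% cap:  ${loss / 1e6:.0f}M")
print(f"internal subsidy becomes ${new_subsidy / 1e6:.0f}M")
```

The same three numbers ($310 million, $238 million, $743 million) appear in the prose; the point of the sketch is that the entire budget shock follows from one rate change and simple proportions.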

    In Memoli’s narrative, this $238 million is the Reaganite’s “waste, fraud and abuse.” The remaining approximately $71 million is legitimate overhead as measured (wrongly) by what Gates et al. have managed to force universities to accept in exchange for the funding of their researchers’ direct costs.

    But the actual situation is even worse than this. It’s not that UCSF now will lose $238 million on their NIH research. In reality, even at (allegedly fat-cat) 64 percent ICR rates, they were already losing tons of money. Here’s another table from the HERD survey.

    table visualization

    There’s UCSF in the No. 2 national position, a major research powerhouse. It spends more than $2 billion a year on research. However, moving across the columns from left to right, you see federal government, state and local government, and then this category, “Institution Funds.” As with most of these big research universities, this is a huge number. UCSF reports to the NSF that it spends more than $500 million a year of its own internal funds on research.

    The reason? Extramurally sponsored research, almost all in science and engineering, loses massive amounts of money even at current recovery rates, day after day, year in, year out. This is not because anyone is doing anything wrong. It is because the infrastructure of contemporary science is very expensive.

    Here’s where we need to build a full counternarrative to the existing one. The existing one, shared by university administrators and Trumpers alike, posits the fiction that universities break even on research. UCSF states, “The University requires full F&A cost recovery.” This is actually a regulative ideal that has never been achieved.

    The reality is this:

    UCSF spends half a billion dollars of its own funding to support its $2 billion total in research. That money comes from the state, from tuition, from clinical revenues and some—less than you’d think—from private donors and corporate sponsors. If NIH’s cuts go through, UCSF’s internal losses on research—the money it has to make up—suddenly jump from an already-high $505 million to $743 million in the current year. This is a complete disaster for the UCSF budget. It will massively hit research, students, the campuses’ state employees, everything.

    The current strategy of chronicling the damage from cuts is good. But it isn’t enough. I’m pleased to see the Association of American Universities, a group of high-end research universities, stating plainly that “colleges and universities pay for 25 percent of total academic R&D expenditures from their own funds. This university contribution amounted to $27.7 billion in FY23, including $6.8 billion in unreimbursed F&A costs.” All university administrations need to shift to this kind of candor.

    Unless the new NIH cuts are put in the context of continuous and severe losses on university research, the public, politicians, journalists, et al. cannot possibly understand the severity of the new crisis. And it will get lost in the blizzard of a thousand Trump-created crises, one of which is affecting pretty much every single person in the country.

    Finally, our full counternarrative needs a third element: showing that systemic fiscal losses on research are in fact good, marvelous, a true public service. A loss on a public good is not a bad and embarrassing fact. Research is supposed to lose money: The university loses money on science so that society gets long-term gains from it. Science has a negative return on investment for the university that conducts it so that there is a massively positive ROI for society, of both the monetary and nonmonetary kind. Add up the education, the discoveries, the health, social, political and cultural benefits: The university courts its own endless fiscal precarity so that society benefits.

    We should also remind everyone that the only people who make money on science are in business. And even there, ROI can take years or decades. Commercial R&D, with a focus on product development and sales, also runs losses. Think of “AI”: Microsoft alone is spending $80 billion on it in 2025, on top of $50 billion in 2024, with no obviously strong revenues yet in sight. This is a huge amount of risky investment—it compares to $60 billion for federal 2023 R&D expenditures on all topics in all disciplines. I’m an AI skeptic but appreciate Microsoft’s reminder that new knowledge means taking losses and plenty of them.

    These up-front losses generate much greater future value of nonmonetary as well as monetary kinds. Look at the University of Pennsylvania, the University of Wisconsin at Madison, Harvard University, et al. in Table 22 above. The sector spent nearly $28 billion of its own money generously subsidizing sponsors’ research, including by subsidizing the federal government itself.

    There’s much more to say about the long-term social compact behind this—how the actual “private sector” gets 100 percent ICR or significantly more, how state cuts factor into this, how student tuition now subsidizes more of STEM research than is fair, how research losses have been a denied driver of tuition increases. There’s more to say about the long-term decline of public universities as research centers that, when properly funded, allow knowledge creation to be distributed widely in the society.

    But my point here is that opening the books on large everyday research losses, especially biomedical research losses of the kind NIH creates, is the only way that journalists, politicians and the wider public will see through the Trumpian lie about these ICR “efficiencies.” It’s also the only way to move toward the full cost recovery that universities deserve and that research needs.


  • Counslr Launches in New Mexico and Illinois; Expands Footprint in New York to Increase Access to Mental Health Support

    Counslr Launches in New Mexico and Illinois; Expands Footprint in New York to Increase Access to Mental Health Support

    NEW YORK, NY – Counslr, a leading B2B mental health and wellness platform, announced today that it has expanded its footprint into the State of New Mexico starting with a partnership with Vista Nueva High School, Aztec, NM; and into the State of Illinois starting with a partnership with Big Hollow School District, Ingleside, IL. These initial partnerships will empower students and staff to prioritize their mental health by enabling them to access unlimited wellness resources and live texting sessions with Counslr’s licensed and vetted mental health support professionals, who are available on-demand, 24/7/365. By increasing accessibility to Counslr’s round-the-clock support, Vista Nueva and Big Hollow aim to bridge gaps in mental health support for students and staff, enabling those who previously did not or could not access care, whether due to cost, inconvenience, or stigma, to receive the support they desire.

    1 in 6 youth suffer from a mental illness, but the majority do not receive mental health support due to substantial obstacles to care. Mental health is an even bigger challenge in rural America due to unique barriers, including fewer providers, which results in longer wait times and insufficient access to crucial mental health services. This resource scarcity underscores the urgency for additional resources and innovative solutions to bridge this critical gap in mental health care for school communities.

    “We are happy to be able to offer students another tool that they can use to support their mental well-being. Knowing that students have been able to speak with a professional outside of school hours helps us know this app was needed and is useful,” states Rebekah Deane, Professional School Counselor, Vista Nueva High School. “We hope this tool also assists students in learning how to navigate systems so that when they graduate high school they know these options exist and they can continue to seek out support when necessary.”

    As factors such as academic pressures, social media influence, burnout and world events contribute to heightened stress levels and mental health challenges, schools throughout the country are recognizing the growing need to offer more accessible resources and preventative mental health services to both students and staff.

    “Counslr provides an extremely easy-to-access platform for those who otherwise may not seek the help they need, and we are very excited to join Counslr in this partnership. We are all very well aware of the impact that technology has had on the mental health of our students and we feel that Counslr can meet our students in a setting they are comfortable with,” states Bob Gold, Big Hollow School District Superintendent. “Outside of our students, we are thrilled to be able to offer this service to the amazing adults who work with our students every day. There are so many families dealing with some sort of trauma, and the life of an educator is no different.  These adults tend to give so much of themselves to their students, so we strongly feel that our efforts here to join with Counslr is our way of providing an opportunity for our educators to focus on their own mental health.”

    In addition to the geographic expansion, Counslr has also expanded its existing footprint in states like New York, most recently partnering with the Silver Creek Central School District to support its students and staff.

    “We know mental health needs are on the rise, for students and adults.  To me, Counslr is a resource our students and staff both deserve,” states Dr. Katie Ralston, Superintendent, Silver Creek Central. “In the beginning stages at Silver Creek Central, it has proven to be an asset, as it offers access to everyone on the spot, any day, for any situation.”

    “Supporting diverse populations of students and faculty across the country clearly illustrates that mental health knows no boundaries,” said Josh Liss, Counslr CEO, adding, “With 86% of Counslr’s users being first-time care seekers, we strive to reach these silent sufferers who need help but do not or cannot access it, no matter where they are located.”

    ABOUT COUNSLR

    Counslr is a text-based mental health support application that provides unlimited access to robust wellness resources and live texting sessions with licensed professionals, 24/7/365. Users can access support on-demand within two minutes of opening the app, or by scheduled appointment. Through real-time texting, users enjoy one-on-one, private communication with a licensed counselor that can be conducted anytime, anywhere. Counslr was designed to help individuals deal with life’s day-to-day issues, empowering individuals to address concerns while they are “small” to help ensure that they stay “small”. Counslr partners with organizations of all shapes and sizes (companies, unions, nonprofits, universities/colleges, high schools, etc) so that these entities can provide Counslr’s services to their employees/members/students at no direct cost. For more information, please visit www.counslr.com.

    eSchool News Staff


  • How foreign aid helps the country that gives it

    How foreign aid helps the country that gives it

    In international relations, nation states vie for power and security. They do this through diplomacy and treaties which establish how they should behave towards one another.

    If those agreements don’t work, states resort to violence to achieve their goals. 

    In addition to diplomatic relations and wars, states can also project their interests through soft power. Dialogue, compromise and consensus are all part of soft power. 

    Foreign assistance, where one country provides money, goods or services to another without explicitly asking for anything in return, is a form of soft power because it can make a needy nation dependent on, or beholden to, a wealthier one.

    In 2023, the U.S. government had obligations to provide some $68 billion in foreign aid spread across more than 10 agencies to more than 200 countries. The U.S. Agency for International Development (USAID) alone spent $38 billion in 2023 and operated in 177 different countries. 

    Spreading good will through aid

    USAID has been fundamental to projecting a positive image of the United States throughout the world. In an essay published by the New York Times, Samantha Power, the former administrator of USAID, described how nearly $20 billion of its assistance went to health programs that combat such things as malaria, tuberculosis, H.I.V./AIDS and infectious disease outbreaks, and humanitarian assistance to respond to emergencies and help stabilize war-torn regions.

    Other USAID investments, she wrote, give girls access to education and the ability to enter the work force. 

    When President John F. Kennedy established USAID in 1961, he said in a message to Congress: “We live at a very special moment in history. The whole southern half of the world — Latin America, Africa, the Middle East, and Asia — are caught up in the adventures of asserting their independence and modernizing their old ways of life. These new nations need aid in loans and technical assistance just as we in the northern half of the world drew successively on one another’s capital and know-how as we moved into industrialization and regular growth.”

    He acknowledged that the reason for the aid was not totally humanitarian.

    “For widespread poverty and chaos lead to a collapse of existing political and social structures which would inevitably invite the advance of totalitarianism into every weak and unstable area,” Kennedy said. “Thus our own security would be endangered and our prosperity imperilled. A program of assistance to the underdeveloped nations must continue because the nation’s interest and the cause of political freedom require it.” 

    Investing in emerging democracies

    The fear of communism was obvious in 1961. The motivation behind U.S. foreign assistance is always both humanitarian and political; the two can never be separated. 

    Today, the United States is competing with China and its Belt and Road Initiative (BRI) for global influence through foreign assistance. The BRI was started by Chinese President Xi Jinping in 2013. It is global, with its Silk Road Economic Belt connecting China with Central Asia and Europe, and the 21st Century Maritime Silk Road connecting China with South and Southeast Asia, Africa and Latin America.

    Most of the projects involve infrastructure improvement — things like roads and bridges, mass transit and power supplies — and increased trade and investment. 

    As of 2023, 149 countries had joined the BRI. In the first half of 2023, a total of $43 billion in agreements were signed. Its lending policy has made China the world’s largest debt collector.

    While Chinese foreign assistance often requires repayment, the United States has dispensed money through USAID with no direct repayment required. Trump thinks that needs to change. “We get tired of giving massive amounts of money to countries that hate us, don’t we?” he said on 27 January 2024.

    Returns are hard to see

    Traditionally, U.S. foreign assistance, unlike the Chinese BRI, has not been transactional. There is no guarantee that what is spent will have a direct impact. Soft power is not quantifiable. Questions of image, status and prestige are hard to measure.

    Besides helping millions of people, Samantha Power gave another more transactional reason for supporting U.S. foreign assistance.

    “USAID has generated vast stores of political capital in the more than 100 countries where it works, making it more likely that when the United States makes hard requests for other leaders — for example — to send peace keepers to a war zone, to help a U.S. company enter a new market or to extradite a criminal to the United States — they say yes,” she wrote.

    Trump is known as a “transactional” president, but even this argument has not convinced him to continue to support USAID. 

    Soft power is definitely not part of his vision of the art of the deal.


     

    Three questions to consider:

    1. What is “foreign aid”?
    2. Why would one country give money to another without asking for anything in return?
    3. Do you think wealthier nations should be obliged to help poorer countries?


     


  • Data, Decisions, and Disruptions: Inside the World of University Rankings

    Data, Decisions, and Disruptions: Inside the World of University Rankings

    University rankings are pretty much everywhere. The earliest university rankings in the U.S. date back to the early 1900s, and the modern ones to the 1983 debut of the U.S. News and World Report rankings. But the kind of rankings we tend to talk about now, international or global rankings, really only dates back to 2003, with the creation of the Shanghai Academic Ranking of World Universities.

    Over the decade that followed that first publication, a triumvirate emerged at the top of the rankings pyramid: the Shanghai Rankings, run by a group of academics at Shanghai Jiao Tong University; the Quacquarelli Symonds, or QS, Rankings; and the Times Higher Education World University Rankings. Between them, these three rankings producers, particularly QS and Times Higher, created a bewildering array of new rankings, dividing the world up by geography and field of study, mainly based on metrics relating to research.

    Joining me today is the former Chief Data Officer of the Times Higher Education Rankings, Duncan Ross. He took over those rankings at a time when it seemed like the higher education world might be running out of things to rank. Under his leadership, though, the Times Higher Impact Rankings, which are based around the 17 UN Sustainable Development Goals, were developed. And that has created a genuinely new hierarchy in world higher education, at least among those institutions that choose to submit to the rankings.

    My discussion with Duncan today covers a wide range of topics related to his time at THE. But the most enjoyable bit by far, for me anyway, was the one about the genesis of the Impact Rankings. Listen especially to the part where Duncan talks about how the Impact Rankings came about because THE realized that its industry rankings weren’t very reliable. Fun fact: around that time I got into a very public debate with Phil Baty, the editor of the Times Higher, on exactly that subject. Which means maybe, just maybe, I’m kind of a godparent to the Impact Rankings. But that’s just me. You may well find other points of interest in this very compelling interview. Let’s hand things over to Duncan.


    The World of Higher Education Podcast
    Episode 3.20 | Data, Decisions, and Disruptions: Inside the World of University Rankings 

    Transcript

    Alex Usher: So, Duncan, let’s start at the beginning. I’m curious—what got you into university rankings in the first place? How did you end up at Times Higher Education in 2015?

    Duncan Ross: I think it was almost by chance. I had been working in the tech sector for a large data warehousing company, which meant I was working across many industries—almost every industry except higher education. I was looking for a new challenge, something completely different. Then a friend approached me and mentioned a role that might interest me. So I started talking to Times Higher Education, and it turned out it really was a great fit.

    Alex Usher: So when you arrived at Times Higher in 2015, the company already had a pretty full set of rankings products, right? They had the global rankings, the regional rankings, which I think started around 2010, and then the subject or field of study rankings came a couple of years later. When you looked at all of that, what did you think? What did you feel needed to be improved?

    Duncan Ross: Well, the first thing I had to do was actually bring all of that production in-house. At the time, even though Times Higher had rankings, they were produced by Clarivate—well, Thomson Reuters, as it was then. They were doing a perfectly good job, but if you’re not in control of the data yourself, there’s a limit to what you can do with it.

    Another key issue was that, while it looked like Times Higher had many rankings, in reality, they had just one: the World University Rankings. The other rankings were simply different cuts of that same data. And even within the World University Rankings, only 400 universities were included, with a strong bias toward Europe and North America. About 26 or 27 percent of those institutions were from the U.S., which didn’t truly reflect the global landscape of higher education.

    So the challenge was: how could we broaden our scope and truly capture the world of higher education beyond the usual suspects? And beyond that, were there other aspects of universities that we could measure, rather than just relying on research-centered metrics? There are good reasons why international rankings tend to focus on research—it’s the most consistent data available—but as you know, it’s certainly not the only way to define excellence in higher education.

    Alex Usher: Oh, yeah. So how did you address the issue of geographic diversity? Was it as simple as saying, “We’re not going to limit it to 400 universities—we’re going to expand it”? I think the ranking now includes over a thousand institutions, right? I’ve forgotten the exact number.

    Duncan Ross: It’s actually around 2,100 or so, and in practice, the number is even larger because, about two years ago, we introduced the concept of reporter institutions. These are institutions that haven’t yet met the criteria to be fully ranked but are already providing data.

    The World University Rankings have an artificial limit because there’s a threshold for participation based on the number of research articles published. That threshold is set at 1,000 papers over a five-year period. If we look at how many universities could potentially meet that criterion, it’s probably around 3,000, and that number keeps growing. But even that is just a fraction of the higher education institutions worldwide. There are likely 30,000—maybe even 40,000—higher education institutions globally, and that’s before we even consider community colleges.

    So, expanding the rankings was about removing artificial boundaries. We needed to reach out to institutions in parts of the world that weren’t well represented and think about higher education in a way that wasn’t so Anglo-centric.

    One of the biggest challenges I’ve encountered—and it’s something people inevitably fall into—is that we tend to view higher education through the lens of our own experiences. But higher education doesn’t function the same way everywhere. It’s easy to assume that all universities should look like those in Canada, the U.S., or the UK—but that’s simply not the case.

    To improve the rankings, we had to be open-minded, engage with institutions globally, and carefully navigate the challenges of collecting data on such a large scale. As a result, Times Higher Education now has data on around 5,000 to 6,000 universities—a huge step up from the original 400. Still, it’s just a fraction of the institutions that exist worldwide.

    Alex Usher: Well, that’s exactly the mission of this podcast—to get people to think beyond an Anglo-centric view of the world. So I take your point that, in your first couple of years at Times Higher Education, most of what you were doing was working with a single set of data and slicing it in different ways.

    But even with that, collecting data for rankings isn’t simple, right? It’s tricky; you have to make a lot of decisions, especially about inclusion—what to include and how to weight different factors. And I think you’ve had to deal with a couple of major issues over the years—one in your first few years and another more recently.

    One was about fractional counting of articles, which I remember went on for quite a while. There was that big surge of CERN-related articles, mostly coming out of Switzerland but with thousands of authors from around the world, which affected the weighting. That led to a move toward fractional weighting, which in theory equalized things a bit—but not everyone agreed.

    More recently, you’ve had an issue with voting, right? What I think was called a cartel of voters in the Middle East, related to the reputation rankings. Can you talk a bit about how you handle these kinds of challenges?

    Duncan Ross: Well, I think the starting point is that we’re always trying to evaluate things in a fair and consistent way. But inevitably, we’re dealing with a very noisy and messy world.

    The two cases you mentioned are actually quite different. One is about adjusting to the norms of the higher education sector, particularly in publishing. A lot of academics, especially those working within a single discipline, assume that publishing works the same way across all fields—that you can create a universal set of rules that apply to everyone. But that’s simply not the case.

    For example, the concept of a first author doesn’t exist in every discipline. Likewise, in some fields, the principal investigator (PI) is always listed at the end of the author list, while in others, that’s not the norm.

    One of the biggest challenges we faced was in fields dealing with big science—large-scale research projects involving hundreds or even thousands of contributors. In high-energy physics, for example, a decision was made back in the 1920s: everyone who participates in an experiment above a certain threshold is listed as an author in alphabetical order. They even have a committee to determine who meets that threshold—because, of course, it’s academia, so there has to be a committee.

    But when you have 5,000 authors on a single paper, that distorts the rankings. So we had to develop a mechanism to handle that. Ideally, we’d have a single metric that works in all cases—just like in physics, where we don’t use one model of gravity in some situations and a different one in others. But sometimes, you have to make exceptions. Now, Times Higher Education is moving toward more sophisticated bibliometric measures to address these challenges in a better way.

    The second issue you mentioned—the voting behavior in reputation rankings—is completely different because it involves inappropriate behavior. And this kind of issue isn’t just institutional; sometimes, it’s at the individual academic level.

    We’re seeing this in publishing as well, where some academics are somehow producing over 200 articles a year. Impressive productivity, sure—but is it actually credible? In cases like this, the approach has to be different. It’s about identifying and penalizing misbehavior.

    At the same time, we don’t want to be judge and jury. It’s difficult because, often, we can see statistical patterns that strongly suggest something is happening, but we don’t always have a smoking gun. So our goal is always to be as fair and equitable as possible while putting safeguards in place to maintain the integrity of the rankings.

    Alex Usher: Duncan, you hinted at this earlier, but I want to turn now to the Impact Rankings. This was the big initiative you introduced at Times Higher Education. Tell us about the genesis of those rankings—where did the idea come from? Why focus on impact? And why the SDGs?

    Duncan Ross: It actually didn’t start out as a sustainability-focused project. The idea came from my colleague, Phil Baty, who had always been concerned that the World University Rankings didn’t include enough measurement around technology transfer.

    So, we set out to collect data from universities on that—looking at things like income from consultancy and university spin-offs. But when the data came back, it was a complete mess—totally inconsistent and fundamentally unusable. So, I had to go back to the drawing board.

    That’s when I came across SDG 9—Industry, Innovation, and Infrastructure. I looked at it and thought, This is interesting. It was compelling because it provided an external framework.

    One of the challenges with ranking models is that people always question them—Is this really a good model for excellence? But with an external framework like the SDGs, if someone challenges it, I can just point to the United Nations and say, Take it up with them.

    At that point, I had done some data science work and was familiar with the tank problem, so I jokingly assumed there were probably 13 to 18 SDGs out there. (That’s a data science joke—those don’t land well 99% of the time.) But as it turned out, there were more SDGs, and exploring them was a real light bulb moment.

    The SDGs provided a powerful framework for understanding the most positive role universities can play in the world today. We all know—well, at least those of us outside the U.S. know—that we’re facing a climate catastrophe. Higher education has a crucial role to play in addressing it.

    So, the question became: How can we support that? How can we measure it? How can we encourage better behavior in this incredibly important sector?

    Alex Usher: The Impact Rankings are very different in that roughly half of the indicators—about 240 to 250 across all 17 SDGs—aren’t naturally quantifiable. Instead, they’re based on stories.

    For example, an institution might submit, This is how we combat organized crime or This is how we ensure our food sourcing is organic. These responses are scored based on institutional submissions.

    Now, I don’t know exactly how Times Higher Education evaluates them, but there has to be a system in place. How do you ensure that these institutional answers—maybe 120 to 130 per institution at most—are scored fairly and consistently when you’re dealing with hundreds of institutions?

    Duncan Ross: Well, I can tell you that this year, over 2,500 institutions submitted approved data—so it’s grown significantly.

    One thing to clarify, though, is that these aren’t written-up reports like the UK’s Teaching Excellence Framework, where universities can submit an essay justifying why they didn’t score as well as expected—what I like to call the dog ate my student statistics paper excuse. Instead, we ask for evidence of the work institutions have done. That evidence can take different forms—sometimes policies, sometimes procedures, sometimes concrete examples of their initiatives.

    The scoring process itself is relatively straightforward. First, we give some credit if an institution says they’re doing something. Then, we assess the evidence they provide to determine whether it actually supports their claim. But the third and most important part is that institutions receive extra credit if the evidence is publicly available. If you publish your policies or reports, you open yourself up to scrutiny, which adds accountability.

    A great example is SDG 5—Gender Equality—specifically around gender pay equity. If an institution claims to have a policy on gender pay equity, we check: Do you publish it? If so, and you’re not actually living up to it, I’d hope—and expect—that women within the institution will challenge you on it. That’s part of the balancing mechanism in this process.

    Now, how do we evaluate all this? Until this year, we relied on a team of assessors. We brought in people, trained them, supported them with our regular staff, and implemented a layer of checks—such as cross-referencing responses against previous years. Ultimately, human assessors were making the decisions.

    This year, as you might expect, we’re introducing AI to assist with the process. AI helps us filter out straightforward cases, leaving the more complex ones for human assessors. It also ensures that we don’t run into assessor fatigue. When someone has reviewed 15 different answers to the same question from various universities, the process can get a bit tedious—AI helps mitigate that.

    Alex Usher: Yeah, it’s like that experiment with Israeli judges, right? You don’t want to be the last case before lunch—you get a much harsher sentence if the judge is making decisions on an empty stomach. I imagine you must have similar issues to deal with in rankings.

    I’ve been really impressed by how enthusiastically institutions have embraced the Impact Rankings. Canadian universities, in particular, have really taken to them. I think we had four of the top ten last year and three of the top ten this year, which is rare for us. But the uptake hasn’t been as strong—at least not yet—in China or the United States, which are arguably the two biggest national players in research-based university rankings. Maybe that’s changing this year, but why do you think the reception has been so different in different parts of the world? And what does that say about how different regions view the purpose of universities?

    Duncan Ross: I think there’s definitely a case that different countries and regions have different approaches to the SDGs. In China, as you might expect, interest in the rankings depends on how well they align with current Communist Party priorities. You could argue that something similar happens in the U.S. The incoming administration has made it fairly clear that SDG 10 (Reduced Inequalities) and SDG 5 (Gender Equality) are not going to be top priorities—probably not SDG 1 (No Poverty), either. So in some cases, a country’s level of engagement reflects its political landscape.

    But sometimes, it also reflects the economic structure of the higher education system itself. In the U.S., where universities rely heavily on high tuition fees, rankings are all about attracting students. And the dominant ranking in that market is U.S. News & World Report—the 600-pound gorilla. If I were in their position, I’d focus on that, too, because it’s the ranking that brings in applications.

    In other parts of the world, though, rankings serve a different purpose. This ties back to our earlier discussion about different priorities in different regions. Take Indonesia, for example. There are over 4,000 universities in the country. If you’re an institution like ITS (Institut Teknologi Sepuluh Nopember), how do you stand out? How do you show that you’re different from other universities?

    For them, the Impact Rankings provided an opportunity to showcase the important work they’re doing—work that might not have been recognized in traditional rankings. And that’s something I’m particularly proud of with the Impact Rankings. Unlike the World University Rankings or the Teaching Rankings, it’s not just the usual suspects at the top.

    One of my favorite examples is Western Sydney University. It’s a fantastic institution. If you’re ever in Sydney, take the train out there. Stay on the train—it’s a long way from the city center—but go visit them. Look at the incredible work they’re doing, not just in sustainability but also in their engagement with Aboriginal and Torres Strait Islander communities. They’re making a real impact, and I’m so pleased that we’ve been able to raise the profile of institutions like Western Sydney—universities that might not otherwise get the recognition they truly deserve.

    Alex Usher: But you’re still left with the problem that many institutions that do really well in research rankings have, in effect, boycotted the Impact Rankings—simply because they’re not guaranteed to come first.

    A lot of them seem to take the attitude of, Why would I participate in a ranking if I don’t know I’ll be at the top?

    I know you initially faced that issue with LERU (the League of European Research Universities), and I guess the U.S. is still a challenge, with lower participation numbers.

    Do you think Times Higher Education will eventually crack that? It’s a tough nut to crack. I mean, even the OECD ran into the same resistance—it was the same people saying, Rankings are terrible, and we don’t want better ones.

    What’s your take on that?

    Duncan Ross: Well, I’ve got a brief anecdote about this whole rankings boycott approach. There’s one university—I’m not going to name them—that made a very public statement about withdrawing from the Times Higher Education World University Rankings. And just to be clear, that’s something you can do, because participation is voluntary—not all rankings are.

    So, they made this big announcement about pulling out. Then, about a month later, we got an email from their graduate studies department asking, Can we get a copy of your rankings? We use them to evaluate applicants for interviews. So, there’s definitely some odd thinking at play here.

    But when it comes to the Impact Rankings, I’m pretty relaxed about it. Sure, it would be nice to have Oxford or Harvard participate—but MIT does, and they’re a reasonably good school, I hear. Spiderman applied there, so it’s got to be decent. The way I see it, the so-called top universities already have plenty of rankings they can focus on. If we say there are 300 top universities in the world, what about the other 36,000 institutions?

    Alex Usher: I just want to end on a slightly different note. While doing some background research for this interview, I came across your involvement in DataKind—a data charity that, if I understand correctly, you founded. I’ve never heard of a data charity before, and I find the idea fascinating—intriguing enough that I’m even thinking about starting one here. Tell us about DataKind—what does it do?

    Duncan Ross: Thank you! So, DataKind was actually founded in the U.S. by Jake Porway. I first came across it at one of the early big data conferences—O’Reilly’s Strata Conference in New York. Jake was talking about how data could be used for good, and at the time, I had been involved in leadership roles at several UK charities. It was a light bulb moment. I went up to Jake and said, Let me start a UK equivalent! At first, he was noncommittal—he said, Yeah, sure… someday. But I just kept nagging him until eventually, he gave in and said yes. Together with an amazing group of people in the UK—Fran Bennett, Caitlin Thaney, and Stuart Townsend—we set up DataKind UK.

    The concept is simple: we often talk about how businesses—whether in telecom, retail, or finance—use data to operate more effectively. The same is true in the nonprofit sector. The difference is that banks can afford to hire data scientists—charities often can’t. So, DataKind was created to connect data scientists with nonprofit organizations, allowing them to volunteer their skills.

    Of course, for this to work, a charity needs a few things:

    1. Leadership willing to embrace data-driven decision-making.
    2. A well-defined problem that can be analyzed.
    3. Access to data—because without data, we can’t do much.

    Over the years, DataKind—both in the U.S. and worldwide—has done incredible work. We’ve helped nonprofits understand what their data is telling them, improve their use of resources, and ultimately, do more for the communities they serve. I stepped down from DataKind UK in 2020 because I believe that the true test of something successful is whether it can continue to thrive without you. And I’m happy to say it’s still going strong. I kind of hope the Impact Rankings continue to thrive at Times Higher Education now that I’ve moved on as well.

    Alex Usher: Yeah. Well, thank you for joining us today, Duncan.

    Duncan Ross: It’s been a pleasure.

    Alex Usher: And it just remains for me to thank our excellent producers, Sam Pufek and Tiffany MacLennan. And you, our viewers, listeners, and readers for joining us today. If you have any questions or comments about today’s episode, please don’t hesitate to get in touch with us at [email protected]. Worried about missing an episode of the World of Higher Education? There’s a solution for that. Go to our YouTube page and subscribe. Next week, our guest will be Jim Dickinson. He’s an associate editor at Wonkhe in the UK, and he’s also maybe the world expert on comparative student politics. And he joins us to talk about the events in Serbia, where the student movement is challenging the populist government of the day. Bye for now.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.

    Source link

  • Which colleges gained R1 status under the revamped Carnegie Classifications?

    Which colleges gained R1 status under the revamped Carnegie Classifications?

    This audio is auto-generated. Please let us know if you have feedback.

    The American Council on Education on Thursday released the latest list of research college designations under the revamped Carnegie Classifications, labeling 187 institutions as Research 1 institutions. 

    The coveted R1 designation is given to universities with the highest levels of research activity. The number of colleges designated as R1 institutions in 2025 rose 28% compared with the last time the list was released, in 2022. 

    The updated list of research institutions is the first that ACE and the Carnegie Foundation for the Advancement of Teaching have released since they updated their methodology for the classifications. The new methodology was created in part to simplify a previously complex formula that left institutions fearful about losing their status. 

    “We hope this more modernized version of Carnegie Classifications will answer more questions in a more sophisticated way about institutions and their position in the ecosystem and will allow decisions to be made much more precisely by philanthropists, by governments, and by students and families,” Ted Mitchell, president of ACE, told Higher Ed Dive.

    Thirty-two institutions moved from the second-highest research level in 2022 — commonly called Research 2, or R2 — to the R1 designation. That group includes Howard University, a historically Black college in Washington, D.C. The private college — which announced a record $122 million in research grants and contracts in 2022 — is the only HBCU with the designation. 

    Other colleges that moved from R2 to R1 include public institutions like the University of Idaho, University of North Dakota, University of Rhode Island, University of Vermont and the University of Wyoming, along with private colleges like Lehigh University, in Pennsylvania, and American University, in Washington, D.C. 

    Just one institution dropped from R1 to R2 status — the University of Alabama in Huntsville. 

    For universities to achieve R1 status under the new methodology, they must spend an average of $50 million on research and development each year and award 70 or more research doctorates. 

    R2 institutions need to spend an average of $5 million per year on research and award 20 or more research doctorates. 

    Previously, the methodology was more complex. In order to keep the R1 and R2 groups of equal size, classifiers determined the line between the two designations with each cycle. They also looked at 10 different variables to determine R1 status. 

    “The previous methodology was opaque and I think led institutions to spend more time trying to figure out what the methodology actually was, perhaps distracting them from more important work,” said Timothy Knowles, president of the Carnegie Foundation. “Institutions that are close to the bar will just be much clearer about what they have to do to get over the bar.”

    The latest crop of R1 institutions each spent an average of $748.4 million annually on research and development from fiscal 2021 to fiscal 2023. During that same period, they awarded an average of 297 research doctorates per year. 

    Texas led the list of states with the most R1 institutions, with 16. California and New York followed closely behind with 14 and 12 institutions, respectively. 

    The 139 R2 institutions on this latest list each spent an average of $55.17 million annually over three years on research and development — just beating the threshold for R1 status. However, they produced an average of only 49 research doctorates per year. 

    This year also marks the first time the classifications have included a new designation: RCU, or research colleges and universities. The new category is meant to recognize institutions that regularly conduct research but don’t confer doctoral degrees. These colleges only need to spend more than an average of $2.5 million annually on research to be recognized as RCUs. 

    This year, 215 colleges and universities have reached that status. Many are master’s- and baccalaureate-level institutions. And some are four-year colleges with a “special focus,” such as medical schools and centers. 

    Two tribal colleges have also reached RCU status: Diné College, in Arizona, and Northwest Indian College, in Washington.


  • $50K threshold for college foreign gift reporting passes House panel

    $50K threshold for college foreign gift reporting passes House panel


    Dive Brief: 

    • The House Committee on Education and Workforce voted Wednesday to advance a bill that would require colleges to report gifts and contracts valued at $50,000 or more from most foreign countries. 
    • That would lower the requirement from the current threshold of $250,000. Republicans argued that the bill, called the Deterrent Act, is needed to prevent foreign influence in higher education. 
    • The bill would also lower the reporting threshold to $0 for the “countries of concern” as determined by the U.S. Code or the secretary of education, which include China, Russia, Iran and North Korea. The proposal would bar colleges from entering into contracts with those countries unless the secretary of education issues them a waiver and renews it each year. 

    Dive Insight: 

    The Deterrent Act would amend Section 117 of the Higher Education Act, which oversees foreign gift and contract reporting requirements for colleges. Republicans on the education committee argued the measure is needed to provide more transparency. 

    A fact sheet on the bill included concerns about foreign adversaries stealing secrets from American universities and influencing student behavior. 

    The fact sheet also referenced a 2024 congressional report that accused two high-profile research institutions — University of California, Berkeley and Georgia Institute of Technology — of failing to meet the current reporting requirements through their partnerships with Chinese universities. 

    “Higher education is one of the jewels of American society,” said Rep. Michael Baumgartner, a Washington Republican who co-sponsored the bill, on Wednesday. “Unfortunately, it’s also an area that is often under attack and used by malign influences to subvert American interests.”

    Under the bill, colleges would face fines and the loss of their Title IV federal student aid funding if they didn’t comply with the reporting requirements. 

    Democrats largely voiced opposition to the measure. 

    However, they focused many of their complaints Wednesday on the Trump administration’s recent moves that have sparked outcry in the higher education sector, including cuts to the National Institutes of Health’s funding for indirect research costs. A judge temporarily blocked the cuts earlier this week. 

    “I understand and I do appreciate the intent behind the Deterrent Act, but if House Republicans and the president truly want to lead in America, and they want America to lead, they must permanently reverse the cuts to the National Institutes of Health,” said Rep. Lucy McBath, a Democrat from Georgia. “It’s not enough for us just to wait outside for the lawsuits to protect folks back home from damaging and possibly illegal orders like these.”

    Virginia Rep. Bobby Scott, the top-ranking Democrat on the committee, struck a similar tone, referencing the Trump administration’s goal of eliminating the U.S. Department of Education. 

    He noted that the authors of Project 2025 — a wide-ranging conservative policy blueprint for the Republican administration — aim to dismantle the Education Department with the stated goal of having the federal government be less involved in schools. 

    “The argument rests on the perception that the federal government is too involved in our schools, and here we are marking up bills that would give the Department of Education more responsibility to impose unfunded mandates and interfere with local schools,” Scott said. 

    The House committee advanced several other bills Wednesday, including those that would allow schools to serve whole milk and aim to end Chinese influence in K-12 education. 

    House lawmakers previously passed the Deterrent Act in 2023, though it was never put to a vote in the Senate. At the time, the American Council on Education and other higher ed groups opposed the bill, objecting in part to the large fines colleges could face for noncompliance. 

    The Republican-backed bill may face better odds in this congressional session, now that the GOP also controls the Senate and the White House.


  • A Major Tool of Nonviolence

    A Major Tool of Nonviolence

    The Higher Education Inquirer has always promoted nonviolence for progressive social change. Strikes and boycotts are two of the most powerful tools when used well. These tools must be part of a strategy that may take years, even generations. The struggle for civil rights for African Americans and other people of color has been ongoing for centuries. Women have never been granted full rights by the US Constitution (the Equal Rights Amendment passed in only 38 states). And the class struggle is never-ending. When we study these struggles, we must be aware of the truth that no single person can make a great difference, but groups acting in concert can. How will you be part of a movement? And what burden are you willing to carry?

    Hidden Women of the Montgomery Bus Boycott


  • Academic freedom doesn’t require college neutrality

    Academic freedom doesn’t require college neutrality

    Amid public campaigns urging universities to commit to “institutional neutrality,” the American Association of University Professors released a lengthy statement Wednesday saying that the term “conceals more than it reveals.”

    The statement, approved by the AAUP’s elected national council last month, says it continues the national scholarly group’s long commitment to emphasizing “the complexity of the issues involved” in the neutrality debate. “Institutional neutrality is neither a necessary condition for academic freedom nor categorically incompatible with it,” it says.

    The push for universities to adopt institutional neutrality policies ramped up as administrators struggled over what, if anything, to say about Hamas’s Oct. 7, 2023, attack on Israelis and Israel’s swift retaliation in the Gaza Strip.

    The AAUP statement notes that “institutional neutrality” has varied meanings and that actions—not just words—convey a point of view. For instance, some argue that to be neutral, institutions shouldn’t adjust their financial investments for anything other than maximizing returns. But the AAUP says that “no decision concerning a university’s investment strategy counts as neutral.”

    The AAUP asserts that by taking any position on divestment—which many campus protesters have asked for—a university “makes a substantive decision little different from its decision to issue a statement that reflects its values.”

    “A university’s decision to speak, or not; to limit its departments or other units from speaking; to divest from investments that conflict with its mission; or to limit protest in order to promote other forms of speech are all choices that might either promote or inhibit academic freedom and thus must be made with an eye to those practical results, not to some empty conception of neutrality,” the AAUP statement says. “The defense of academic freedom has never been a neutral act.”

    Steven McGuire, Paul and Karen Levy Fellow in Campus Freedom at the conservative American Council of Trustees and Alumni, called the statement “another unhelpful document from the AAUP.”

    “Institutional neutrality is a long-standing principle that can both protect academic freedom and help colleges and universities to stick to their academic missions,” McGuire told Inside Higher Ed. “It’s critical that institutional neutrality be enforced not only to protect individual faculty members on campus, but also to help to depoliticize American colleges and universities at a time when they have become overpoliticized” and are viewed as biased.


  • Trump administration rescinds Title IX guidance on athlete pay

    Trump administration rescinds Title IX guidance on athlete pay

    The Trump administration announced Wednesday it is rolling back guidance issued in the final days of the Biden administration that said payments to college athletes through revenue-sharing agreements or from name, image and likeness deals “must be made proportionately available to male and female athletes.”

    Republicans quickly criticized the guidance and called for its rescission, arguing that mandating equal pay between men’s and women’s sports could cause some colleges to cut athletics programs.

    Under Title IX, colleges must provide “substantially proportionate” financial assistance to male and female athletes, though it wasn’t clear until the Biden guidance whether that requirement applied to NIL deals or revenue-sharing agreements. A settlement reached in the House v. NCAA case would require colleges to share revenue with athletes starting in the 2025–26 academic year and provide back pay.

    The Trump administration said the guidance was “overly burdensome” and “profoundly unfair.”

    “Enacted over 50 years ago, Title IX says nothing about how revenue-generating athletics programs should allocate compensation among student athletes,” acting assistant secretary for civil rights Craig Trainor said in a statement. “The claim that Title IX forces schools and colleges to distribute student-athlete revenues proportionately based on gender equity considerations is sweeping and would require clear legal authority to support it.”

    A federal judge is set to sign off on the House settlement later this spring. Several athletes have objected to the plan, including some groups of women athletes who argue the revenue won’t be shared equitably and will primarily benefit men who play football and basketball.


  • Senate holds confirmation hearing for Linda McMahon

    Senate holds confirmation hearing for Linda McMahon

    President Trump’s pick to lead the Education Department, Linda McMahon, will appear today before a key Senate committee to kick off the confirmation process.

    The hearing comes at a tumultuous time for the Education Department and higher education, and questions about the agency’s future will likely dominate the proceedings, which kick off at 10 a.m. The Inside Higher Ed team will have live updates throughout the morning and afternoon, so follow along.

    McMahon has been through the wringer of a confirmation hearing before, as she was appointed to lead the Small Business Administration during Trump’s first term. But this time around the former wrestling CEO can expect tougher questions, particularly from Democrats, as the Trump administration has already taken a number of unprecedented, controversial and, at times, seemingly unconstitutional actions in just three short weeks.

    Our live coverage of the hearing will kick off at 9:15 a.m. In the meantime, you can read more about McMahon, the latest at the department and what to expect below:


