Blog

  • How foreign aid helps the country that gives it

    In international relations, nation states vie for power and security. They do this through diplomacy and treaties which establish how they should behave towards one another.

    When those agreements fail, states may resort to violence to achieve their goals.

    In addition to diplomatic relations and wars, states can also project their interests through soft power. Dialogue, compromise and consensus are all part of soft power. 

    Foreign assistance, where one country provides money, goods or services to another without explicitly asking for anything in return, is a form of soft power because it can make a needy nation dependent on, or beholden to, a wealthier one.

    In 2023, the U.S. government had obligations to provide some $68 billion in foreign aid spread across more than 10 agencies to more than 200 countries. The U.S. Agency for International Development (USAID) alone spent $38 billion in 2023 and operated in 177 different countries. 

    Spreading good will through aid

    USAID has been fundamental to projecting a positive image of the United States throughout the world. In an essay published by the New York Times, Samantha Power, the former administrator of USAID, described how nearly $20 billion of its assistance went to health programs that combat malaria, tuberculosis, H.I.V./AIDS and infectious disease outbreaks, and to humanitarian assistance that responds to emergencies and helps stabilize war-torn regions.

    Other USAID investments, she wrote, give girls access to education and the ability to enter the work force. 

    When President John F. Kennedy established USAID in 1961, he said in a message to Congress: “We live at a very special moment in history. The whole southern half of the world — Latin America, Africa, the Middle East, and Asia — are caught up in the adventures of asserting their independence and modernizing their old ways of life. These new nations need aid in loans and technical assistance just as we in the northern half of the world drew successively on one another’s capital and know-how as we moved into industrialization and regular growth.”

    He acknowledged that the reason for the aid was not totally humanitarian.

    “For widespread poverty and chaos lead to a collapse of existing political and social structures which would inevitably invite the advance of totalitarianism into every weak and unstable area,” Kennedy said. “Thus our own security would be endangered and our prosperity imperilled. A program of assistance to the underdeveloped nations must continue because the nation’s interest and the cause of political freedom require it.” 

    Investing in emerging democracies

    The fear of communism was obvious in 1961. The motivation behind U.S. foreign assistance is always both humanitarian and political; the two can never be separated. 

    Today, the United States is competing with China and its Belt and Road Initiative (BRI) for global influence through foreign assistance. The BRI was started by Chinese President Xi Jinping in 2013. It is global, with its Silk Road Economic Belt connecting China with Central Asia and Europe, and the 21st Century Maritime Silk Road connecting China with South and Southeast Asia, Africa and Latin America.

    Most of the projects involve infrastructure improvement — things like roads and bridges, mass transit and power supplies — and increased trade and investment. 

    As of 2023, 149 countries had joined the BRI. In the first half of 2023 alone, agreements totaling $43 billion were signed. Its lending policy has made China the world’s largest debt collector.

    While Chinese foreign assistance often requires repayment, the United States has dispensed money through USAID with no direct repayment expected. Trump thinks that needs to change. “We get tired of giving massive amounts of money to countries that hate us, don’t we?” he said on 27 January 2024.

    Returns are hard to see.

    Traditionally, U.S. foreign assistance, unlike the Chinese BRI, has not been transactional. There is no guarantee that what is spent will have a direct impact. Soft power is not quantifiable. Questions of image, status and prestige are hard to measure.

    Beyond its role in helping millions of people, Samantha Power gave another, more transactional reason for supporting U.S. foreign assistance.

    “USAID has generated vast stores of political capital in the more than 100 countries where it works, making it more likely that when the United States makes hard requests of other leaders — for example, to send peacekeepers to a war zone, to help a U.S. company enter a new market or to extradite a criminal to the United States — they say yes,” she wrote.

    Trump is known as a “transactional” president, but even this argument has not convinced him to continue to support USAID. 

    Soft power is definitely not part of his vision of the art of the deal.



    Three questions to consider:

    1. What is “foreign aid”?
    2. Why would one country give money to another without asking for anything in return?
    3. Do you think wealthier nations should be obliged to help poorer countries?



  • Data, Decisions, and Disruptions: Inside the World of University Rankings

    University rankings are pretty much everywhere. The earliest university rankings in the U.S. date back to the early 1900s, and the modern ones to the 1983 debut of the U.S. News and World Report rankings. But the kind of rankings we tend to talk about now, international or global rankings, really only date back to 2003 with the creation of the Shanghai Academic Ranking of World Universities.

    Over the decade that followed that first publication, a triumvirate emerged at the top of the rankings pyramid: the Shanghai Rankings, run by a group of academics at Shanghai Jiao Tong University; the Quacquarelli Symonds, or QS, Rankings; and the Times Higher Education World University Rankings. Between them, these three rankings producers, particularly QS and Times Higher, created a bewildering array of new rankings, dividing the world up by geography and field of study, mainly based on metrics relating to research.

    Joining me today is the former Chief Data Officer of the Times Higher Education Rankings, Duncan Ross. He took over those rankings at a time when it seemed like the higher education world might be running out of things to rank. Under his leadership, though, the Times Higher Impact Rankings, which are based around the 17 UN Sustainable Development Goals, were developed. And that has created a genuinely new hierarchy in world higher education, at least among those institutions that choose to submit to the rankings.

    My discussion with Duncan today covers a wide range of topics related to his time at THE. But the most enjoyable bit by far, for me anyway, was the one about the genesis of the Impact Rankings. Listen for the part where Duncan talks about how the Impact Rankings came about because THE realized that its industry rankings weren’t very reliable. Fun fact: around that time I got into a very public debate with Phil Baty, the editor of the Times Higher, on exactly that subject. Which means maybe, just maybe, I’m kind of a godparent to the Impact Rankings. But that’s just me. You may well find other points of interest in this very compelling interview. Let’s hand things over to Duncan.


    The World of Higher Education Podcast
    Episode 3.20 | Data, Decisions, and Disruptions: Inside the World of University Rankings 

    Transcript

    Alex Usher: So, Duncan, let’s start at the beginning. I’m curious—what got you into university rankings in the first place? How did you end up at Times Higher Education in 2015?

    Duncan Ross: I think it was almost by chance. I had been working in the tech sector for a large data warehousing company, which meant I was working across many industries—almost every industry except higher education. I was looking for a new challenge, something completely different. Then a friend approached me and mentioned a role that might interest me. So I started talking to Times Higher Education, and it turned out it really was a great fit.

    Alex Usher: So when you arrived at Times Higher in 2015, the company already had a pretty full set of rankings products, right? They had the global rankings, the regional rankings, which I think started around 2010, and then the subject or field of study rankings came a couple of years later. When you looked at all of that, what did you think? What did you feel needed to be improved?

    Duncan Ross: Well, the first thing I had to do was actually bring all of that production in-house. At the time, even though Times Higher had rankings, they were produced by Clarivate—well, Thomson Reuters, as it was then. They were doing a perfectly good job, but if you’re not in control of the data yourself, there’s a limit to what you can do with it.

    Another key issue was that, while it looked like Times Higher had many rankings, in reality, they had just one: the World University Rankings. The other rankings were simply different cuts of that same data. And even within the World University Rankings, only 400 universities were included, with a strong bias toward Europe and North America. About 26 or 27 percent of those institutions were from the U.S., which didn’t truly reflect the global landscape of higher education.

    So the challenge was: how could we broaden our scope and truly capture the world of higher education beyond the usual suspects? And beyond that, were there other aspects of universities that we could measure, rather than just relying on research-centered metrics? There are good reasons why international rankings tend to focus on research—it’s the most consistent data available—but as you know, it’s certainly not the only way to define excellence in higher education.

    Alex Usher: Oh, yeah. So how did you address the issue of geographic diversity? Was it as simple as saying, “We’re not going to limit it to 400 universities—we’re going to expand it”? I think the ranking now includes over a thousand institutions, right? I’ve forgotten the exact number.

    Duncan Ross: It’s actually around 2,100 or so, and in practice, the number is even larger because, about two years ago, we introduced the concept of reporter institutions. These are institutions that haven’t yet met the criteria to be fully ranked but are already providing data.

    The World University Rankings have an artificial limit because there’s a threshold for participation based on the number of research articles published. That threshold is set at 1,000 papers over a five-year period. If we look at how many universities could potentially meet that criterion, it’s probably around 3,000, and that number keeps growing. But even that is just a fraction of the higher education institutions worldwide. There are likely 30,000—maybe even 40,000—higher education institutions globally, and that’s before we even consider community colleges.

    So, expanding the rankings was about removing artificial boundaries. We needed to reach out to institutions in parts of the world that weren’t well represented and think about higher education in a way that wasn’t so Anglo-centric.

    One of the biggest challenges I’ve encountered—and it’s something people inevitably fall into—is that we tend to view higher education through the lens of our own experiences. But higher education doesn’t function the same way everywhere. It’s easy to assume that all universities should look like those in Canada, the U.S., or the UK—but that’s simply not the case.

    To improve the rankings, we had to be open-minded, engage with institutions globally, and carefully navigate the challenges of collecting data on such a large scale. As a result, Times Higher Education now has data on around 5,000 to 6,000 universities—a huge step up from the original 400. Still, it’s just a fraction of the institutions that exist worldwide.

    Alex Usher: Well, that’s exactly the mission of this podcast—to get people to think beyond an Anglo-centric view of the world. So I take your point that, in your first couple of years at Times Higher Education, most of what you were doing was working with a single set of data and slicing it in different ways.

    But even with that, collecting data for rankings isn’t simple, right? It’s tricky, you have to make a lot of decisions, especially about inclusion—what to include and how to weight different factors. And I think you’ve had to deal with a couple of major issues over the years—one in your first few years and another more recently.

    One was about fractional counting of articles, which I remember went on for quite a while. There was that big surge of CERN-related articles, mostly coming out of Switzerland but with thousands of authors from around the world, which affected the weighting. That led to a move toward fractional weighting, which in theory equalized things a bit—but not everyone agreed.

    More recently, you’ve had an issue with voting, right? What I think was called a cartel of voters in the Middle East, related to the reputation rankings. Can you talk a bit about how you handle these kinds of challenges?

    Duncan Ross: Well, I think the starting point is that we’re always trying to evaluate things in a fair and consistent way. But inevitably, we’re dealing with a very noisy and messy world.

    The two cases you mentioned are actually quite different. One is about adjusting to the norms of the higher education sector, particularly in publishing. A lot of academics, especially those working within a single discipline, assume that publishing works the same way across all fields—that you can create a universal set of rules that apply to everyone. But that’s simply not the case.

    For example, the concept of a first author doesn’t exist in every discipline. Likewise, in some fields, the principal investigator (PI) is always listed at the end of the author list, while in others, that’s not the norm.

    One of the biggest challenges we faced was in fields dealing with big science—large-scale research projects involving hundreds or even thousands of contributors. In high-energy physics, for example, a decision was made back in the 1920s: everyone who participates in an experiment above a certain threshold is listed as an author in alphabetical order. They even have a committee to determine who meets that threshold—because, of course, it’s academia, so there has to be a committee.

    But when you have 5,000 authors on a single paper, that distorts the rankings. So we had to develop a mechanism to handle that. Ideally, we’d have a single metric that works in all cases—just like in physics, where we don’t use one model of gravity in some situations and a different one in others. But sometimes, you have to make exceptions. Now, Times Higher Education is moving toward more sophisticated bibliometric measures to address these challenges in a better way.

    The second issue you mentioned—the voting behavior in reputation rankings—is completely different because it involves inappropriate behavior. And this kind of issue isn’t just institutional; sometimes, it’s at the individual academic level.

    We’re seeing this in publishing as well, where some academics are somehow producing over 200 articles a year. Impressive productivity, sure—but is it actually viable? In cases like this, the approach has to be different. It’s about identifying and penalizing misbehavior.

    At the same time, we don’t want to be judge and jury. It’s difficult because, often, we can see statistical patterns that strongly suggest something is happening, but we don’t always have a smoking gun. So our goal is always to be as fair and equitable as possible while putting safeguards in place to maintain the integrity of the rankings.

    Alex Usher: Duncan, you hinted at this earlier, but I want to turn now to the Impact Rankings. This was the big initiative you introduced at Times Higher Education. Tell us about the genesis of those rankings—where did the idea come from? Why focus on impact? And why the SDGs?

    Duncan Ross: It actually didn’t start out as a sustainability-focused project. The idea came from my colleague, Phil Baty, who had always been concerned that the World University Rankings didn’t include enough measurement around technology transfer.

    So, we set out to collect data from universities on that—looking at things like income from consultancy and university spin-offs. But when the data came back, it was a complete mess—totally inconsistent and fundamentally unusable. So, I had to go back to the drawing board.

    That’s when I came across SDG 9—Industry, Innovation, and Infrastructure. I looked at it and thought, This is interesting. It was compelling because it provided an external framework.

    One of the challenges with ranking models is that people always question them—Is this really a good model for excellence? But with an external framework like the SDGs, if someone challenges it, I can just point to the United Nations and say, Take it up with them.

    At that point, I had done some data science work and was familiar with the German tank problem, so I jokingly assumed there were probably 13 to 18 SDGs out there. (That’s a data science joke — those don’t land well 99% of the time.) But as it turned out, there were more SDGs, and exploring them was a real light bulb moment.

    The SDGs provided a powerful framework for understanding the most positive role universities can play in the world today. We all know—well, at least those of us outside the U.S. know—that we’re facing a climate catastrophe. Higher education has a crucial role to play in addressing it.

    So, the question became: How can we support that? How can we measure it? How can we encourage better behavior in this incredibly important sector?

    Alex Usher: The Impact Rankings are very different in that roughly half of the indicators—about 240 to 250 across all 17 SDGs—aren’t naturally quantifiable. Instead, they’re based on stories.

    For example, an institution might submit, This is how we combat organized crime or This is how we ensure our food sourcing is organic. These responses are scored based on institutional submissions.

    Now, I don’t know exactly how Times Higher Education evaluates them, but there has to be a system in place. How do you ensure that these institutional answers—maybe 120 to 130 per institution at most—are scored fairly and consistently when you’re dealing with hundreds of institutions?

    Duncan Ross: Well, I can tell you that this year, over 2,500 institutions submitted approved data—so it’s grown significantly. One thing to clarify, though, is that these aren’t written-up reports like the UK’s Teaching Excellence Framework, where universities can submit an essay justifying why they didn’t score as well as expected—what I like to call the dog ate my student statistics paper excuse. Instead, we ask for evidence of the work institutions have done. That evidence can take different forms—sometimes policies, sometimes procedures, sometimes concrete examples of their initiatives. The scoring process itself is relatively straightforward. First, we give some credit if an institution says they’re doing something. Then, we assess the evidence they provide to determine whether it actually supports their claim. But the third and most important part is that institutions receive extra credit if the evidence is publicly available. If you publish your policies or reports, you open yourself up to scrutiny, which adds accountability.

    A great example is SDG 5—Gender Equality—specifically around gender pay equity. If an institution claims to have a policy on gender pay equity, we check: Do you publish it? If so, and you’re not actually living up to it, I’d hope—and expect—that women within the institution will challenge you on it. That’s part of the balancing mechanism in this process.

    Now, how do we evaluate all this? Until this year, we relied on a team of assessors. We brought in people, trained them, supported them with our regular staff, and implemented a layer of checks—such as cross-referencing responses against previous years. Ultimately, human assessors were making the decisions.

    This year, as you might expect, we’re introducing AI to assist with the process. AI helps us filter out straightforward cases, leaving the more complex ones for human assessors. It also ensures that we don’t run into assessor fatigue. When someone has reviewed 15 different answers to the same question from various universities, the process can get a bit tedious—AI helps mitigate that.

    Alex Usher: Yeah, it’s like that experiment with Israeli judges, right? You don’t want to be the last case before lunch—you get a much harsher sentence if the judge is making decisions on an empty stomach. I imagine you must have similar issues to deal with in rankings.

    I’ve been really impressed by how enthusiastically institutions have embraced the Impact Rankings. Canadian universities, in particular, have really taken to them. I think we had four of the top ten last year and three of the top ten this year, which is rare for us. But the uptake hasn’t been as strong—at least not yet—in China or the United States, which are arguably the two biggest national players in research-based university rankings. Maybe that’s changing this year, but why do you think the reception has been so different in different parts of the world? And what does that say about how different regions view the purpose of universities?

    Duncan Ross: I think there’s definitely a case that different countries and regions have different approaches to the SDGs. In China, as you might expect, interest in the rankings depends on how well they align with current Communist Party priorities. You could argue that something similar happens in the U.S. The incoming administration has made it fairly clear that SDG 10 (Reduced Inequalities) and SDG 5 (Gender Equality) are not going to be top priorities—probably not SDG 1 (No Poverty), either. So in some cases, a country’s level of engagement reflects its political landscape.

    But sometimes, it also reflects the economic structure of the higher education system itself. In the U.S., where universities rely heavily on high tuition fees, rankings are all about attracting students. And the dominant ranking in that market is U.S. News & World Report—the 600-pound gorilla. If I were in their position, I’d focus on that, too, because it’s the ranking that brings in applications.

    In other parts of the world, though, rankings serve a different purpose. This ties back to our earlier discussion about different priorities in different regions. Take Indonesia, for example. There are over 4,000 universities in the country. If you’re an institution like ITS (Institut Teknologi Sepuluh Nopember), how do you stand out? How do you show that you’re different from other universities?

    For them, the Impact Rankings provided an opportunity to showcase the important work they’re doing—work that might not have been recognized in traditional rankings. And that’s something I’m particularly proud of with the Impact Rankings. Unlike the World University Rankings or the Teaching Rankings, it’s not just the usual suspects at the top.

    One of my favorite examples is Western Sydney University. It’s a fantastic institution. If you’re ever in Sydney, take the train out there. Stay on the train—it’s a long way from the city center—but go visit them. Look at the incredible work they’re doing, not just in sustainability but also in their engagement with Aboriginal and Torres Strait Islander communities. They’re making a real impact, and I’m so pleased that we’ve been able to raise the profile of institutions like Western Sydney—universities that might not otherwise get the recognition they truly deserve.

    Alex Usher: But you’re still left with the problem that many institutions that do really well in research rankings have, in effect, boycotted the Impact Rankings—simply because they’re not guaranteed to come first.

    A lot of them seem to take the attitude of, Why would I participate in a ranking if I don’t know I’ll be at the top?

    I know you initially faced that issue with LERU (the League of European Research Universities), and I guess the U.S. is still a challenge, with lower participation numbers.

    Do you think Times Higher Education will eventually crack that? It’s a tough nut to crack. I mean, even the OECD ran into the same resistance—it was the same people saying, Rankings are terrible, and we don’t want better ones.

    What’s your take on that?

    Duncan Ross: Well, I’ve got a brief anecdote about this whole rankings boycott approach. There’s one university—I’m not going to name them—that made a very public statement about withdrawing from the Times Higher Education World University Rankings. And just to be clear, that’s something you can do, because participation is voluntary—not all rankings are. So, they made this big announcement about pulling out. Then, about a month later, we got an email from their graduate studies department asking, Can we get a copy of your rankings? We use them to evaluate applicants for interviews. So, there’s definitely some odd thinking at play here. But when it comes to the Impact Rankings, I’m pretty relaxed about it. Sure, it would be nice to have Oxford or Harvard participate—but MIT does, and they’re a reasonably good school, I hear. Spiderman applied there, so it’s got to be decent. The way I see it, the so-called top universities already have plenty of rankings they can focus on. If we say there are 300 top universities in the world, what about the other 36,000 institutions?

    Alex Usher: I just want to end on a slightly different note. While doing some background research for this interview, I came across your involvement in DataKind—a data charity that, if I understand correctly, you founded. I’ve never heard of a data charity before, and I find the idea fascinating—intriguing enough that I’m even thinking about starting one here. Tell us about DataKind—what does it do?

    Duncan Ross: Thank you! So, DataKind was actually founded in the U.S. by Jake Porway. I first came across it at one of the early big data conferences—O’Reilly’s Strata Conference in New York. Jake was talking about how data could be used for good, and at the time, I had been involved in leadership roles at several UK charities. It was a light bulb moment. I went up to Jake and said, Let me start a UK equivalent! At first, he was noncommittal—he said, Yeah, sure… someday. But I just kept nagging him until eventually, he gave in and said yes. Together with an amazing group of people in the UK—Fran Bennett, Caitlin Thaney, and Stuart Townsend—we set up DataKind UK.

    The concept is simple: we often talk about how businesses—whether in telecom, retail, or finance—use data to operate more effectively. The same is true in the nonprofit sector. The difference is that banks can afford to hire data scientists—charities often can’t. So, DataKind was created to connect data scientists with nonprofit organizations, allowing them to volunteer their skills.

    Of course, for this to work, a charity needs a few things:

    1. Leadership willing to embrace data-driven decision-making.
    2. A well-defined problem that can be analyzed.
    3. Access to data—because without data, we can’t do much.

    Over the years, DataKind—both in the U.S. and worldwide—has done incredible work. We’ve helped nonprofits understand what their data is telling them, improve their use of resources, and ultimately, do more for the communities they serve. I stepped down from DataKind UK in 2020 because I believe that the true test of something successful is whether it can continue to thrive without you. And I’m happy to say it’s still going strong. I kind of hope the Impact Rankings continue to thrive at Times Higher Education now that I’ve moved on as well.

    Alex Usher: Yeah. Well, thank you for joining us today, Duncan.

    Duncan Ross: It’s been a pleasure.

    And it just remains for me to thank our excellent producers, Sam Pufek and Tiffany MacLennan. And you, our viewers, listeners, and readers for joining us today. If you have any questions or comments about today’s episode, please don’t hesitate to get in touch with us at [email protected]. Worried about missing an episode of the World of Higher Education? There’s a solution for that. Go to our YouTube page and subscribe. Next week, our guest will be Jim Dickinson. He’s an associate editor at Wonkhe in the UK, and he’s also maybe the world expert on comparative student politics. And he joins us to talk about the events in Serbia where the student movement is challenging the populist government of the day. Bye for now.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.

  • Which colleges gained R1 status under the revamped Carnegie Classifications?

    The American Council on Education on Thursday released the latest list of research college designations under the revamped Carnegie Classifications, labeling 187 institutions as Research 1 institutions. 

    The coveted R1 designation is given to universities with the highest levels of research activity. The number of colleges designated as R1 institutions in 2025 rose 28% compared with the last time the list was released, in 2022. 

    The updated list of research institutions is the first that ACE and the Carnegie Foundation for the Advancement of Teaching have released since they updated their methodology for the classifications. The new methodology was created in part to simplify a previously complex formula that left institutions fearful about losing their status. 

    “We hope this more modernized version of Carnegie Classifications will answer more questions in a more sophisticated way about institutions and their position in the ecosystem and will allow decisions to be made much more precisely by philanthropists, by governments, and by students and families,” Ted Mitchell, president of ACE, told Higher Ed Dive.

    Thirty-two institutions moved from the second-highest research level in 2022 — commonly called Research 2, or R2 — to the R1 designation. That group includes Howard University, a historically Black college in Washington, D.C. The private college — which announced a record $122 million in research grants and contracts in 2022 — is the only HBCU with the designation.

    Other colleges that moved from R2 to R1 include public institutions like the University of Idaho, University of North Dakota, University of Rhode Island, University of Vermont and the University of Wyoming, along with private colleges like Lehigh University, in Pennsylvania, and American University, in Washington, D.C. 

    Just one institution dropped from R1 to R2 status — the University of Alabama in Huntsville. 

    For universities to achieve R1 status under the new methodology, they must spend an average of $50 million on research and development each year and award 70 or more research doctorates. 

    R2 institutions need to spend an average of $5 million per year on research and award 20 or more research doctorates. 
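
    Taken together, the two cutoffs amount to a simple decision rule. Here is a rough sketch of that logic in Python — the function name and labels are illustrative, not part of the Carnegie methodology itself:

    ```python
    def carnegie_research_tier(avg_rd_spend_usd: float, doctorates_per_year: int) -> str:
        """Illustrative sketch of the 2025 R1/R2 thresholds described above.

        avg_rd_spend_usd: average annual research-and-development spending
        doctorates_per_year: research doctorates awarded per year
        """
        if avg_rd_spend_usd >= 50_000_000 and doctorates_per_year >= 70:
            return "R1"
        if avg_rd_spend_usd >= 5_000_000 and doctorates_per_year >= 20:
            return "R2"
        return "neither"

    # A campus spending $55 million a year but awarding only 49 doctorates
    # lands in R2 despite clearing the R1 spending bar.
    print(carnegie_research_tier(55_000_000, 49))  # prints "R2"
    ```

    Both conditions must hold for R1, which is why spending alone is not enough to move up a tier.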

    Previously, the methodology was more complex. In order to keep the R1 and R2 groups of equal size, classifiers determined the line between the two designations with each cycle. They also looked at 10 different variables to determine R1 status. 

    “The previous methodology was opaque and I think led institutions to spend more time trying to figure out what the methodology actually was, perhaps distracting them from more important work,” said Timothy Knowles, president of the Carnegie Foundation. “Institutions that are close to the bar will just be much clearer about what they have to do to get over the bar.”

    The latest crop of R1 institutions each spent an average of $748.4 million annually on research and development from fiscal 2021 to fiscal 2023. During that same period, they awarded an average of 297 research doctorates per year. 

    Texas led the list of states with the most R1 institutions, with 16. California and New York followed closely behind with 14 and 12 institutions, respectively. 

    The 139 R2 institutions on this latest list each spent an average of $55.17 million annually over three years on research and development — exceeding even the spending threshold for R1 status. However, they produced an average of only 49 research doctorates per year, well short of the 70 required for R1. 

    This year also marks the first time the classifications have included a new designation: RCU, or research colleges and universities. The new category is meant to recognize institutions that regularly conduct research but don’t confer doctoral degrees. These colleges only need to spend more than an average of $2.5 million annually on research to be recognized as RCUs. 
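    The thresholds described above amount to a simple decision rule. Here is a minimal sketch of that rule, assuming the three-year averages are already computed; the `classify` function and its "unclassified" fallback are illustrative, not part of the official Carnegie methodology:

    ```python
    def classify(avg_rd_spend_musd: float, avg_doctorates: int) -> str:
        """Return the designation implied by the thresholds quoted in the article.

        avg_rd_spend_musd: average annual R&D spending, in millions of dollars
        avg_doctorates: average annual research doctorates awarded
        """
        if avg_rd_spend_musd >= 50 and avg_doctorates >= 70:
            return "R1"
        if avg_rd_spend_musd >= 5 and avg_doctorates >= 20:
            return "R2"
        if avg_rd_spend_musd > 2.5:
            return "RCU"  # research colleges and universities (non-doctoral)
        return "unclassified"

    # The article's averages: R2 institutions spent $55.17M but awarded only
    # 49 doctorates, so they stay R2 despite clearing the R1 spending bar.
    print(classify(55.17, 49))  # prints R2
    ```

    This also makes the article's headline fact concrete: under the old methodology the R1/R2 line moved each cycle to keep the groups equal in size, whereas here an institution can check its own numbers against fixed cutoffs.
    
    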

    This year, 215 colleges and universities have reached that status. Many are master’s- and baccalaureate-level institutions. And some are four-year colleges with a “special focus,” such as medical schools and centers. 

    Two tribal colleges have also reached RCU status: Diné College, in Arizona, and Northwest Indian College, in Washington.


  • $50K threshold for college foreign gift reporting passes House panel

    $50K threshold for college foreign gift reporting passes House panel


    Dive Brief: 

    • The House Committee on Education and Workforce voted Wednesday to advance a bill that would require colleges to report gifts and contracts valued at $50,000 or more from most foreign countries. 
    • That would lower the requirement from the current threshold of $250,000. Republicans argued that the bill, called the Deterrent Act, is needed to prevent foreign influence in higher education. 
    • The bill would also lower the reporting threshold to $0 for the “countries of concern” as determined by the U.S. Code or the secretary of education, which include China, Russia, Iran and North Korea. The proposal would bar colleges from entering into contracts with those countries unless the secretary of education issues them a waiver and renews it each year. 

    Dive Insight: 

    The Deterrent Act would amend Section 117 of the Higher Education Act, which oversees foreign gift and contract reporting requirements for colleges. Republicans on the education committee argued the measure is needed to provide more transparency. 

    A fact sheet on the bill included concerns about foreign adversaries stealing secrets from American universities and influencing student behavior. 

    The fact sheet also referenced a 2024 congressional report that accused two high-profile research institutions — University of California, Berkeley and Georgia Institute of Technology — of failing to meet the current reporting requirements through their partnerships with Chinese universities. 

    “Higher education is one of the jewels of American society,” said Rep. Michael Baumgartner, a Washington Republican who co-sponsored the bill, on Wednesday. “Unfortunately, it’s also an area that is often under attack and used by malign influences to subvert American interests.”

    Under the bill, colleges would face fines and the loss of their Title IV federal student aid funding if they didn’t comply with the reporting requirements. 

    Democrats largely voiced opposition to the measure. 

    However, they focused many of their complaints Wednesday on the Trump administration’s recent moves that have sparked outcry in the higher education sector, including cuts to the National Institutes of Health’s funding for indirect research costs. A judge temporarily blocked the cuts earlier this week. 

    “I understand and I do appreciate the intent behind the Deterrent Act, but if House Republicans and the president truly want to lead in America, and they want America to lead, they must permanently reverse the cuts to the National Institutes of Health,” said Rep. Lucy McBath, a Democrat from Georgia. “It’s not enough for us just to wait outside for the lawsuits to protect folks back home from damaging and possibly illegal orders like these.”

    Virginia Rep. Bobby Scott, the top-ranking Democrat on the committee, struck a similar tone, referencing the Trump administration’s goal of eliminating the U.S. Department of Education. 

    He noted that the authors of Project 2025 — a wide-ranging conservative policy blueprint for the Republican administration — aim to dismantle the Education Department with the stated goal of having the federal government be less involved in schools. 

    “The argument rests on the perception that the federal government is too involved in our schools, and here we are marking up bills that would give the Department of Education more responsibility to impose unfunded mandates and interfere with local schools,” Scott said. 

    The House committee advanced several other bills Wednesday, including those that would allow schools to serve whole milk and aim to end Chinese influence in K-12 education. 

    House lawmakers previously passed the Deterrent Act in 2023, though it was never put to a vote in the Senate. At the time, the American Council on Education and other higher ed groups opposed the bill, objecting in part to the large fines colleges could face for noncompliance. 

    The Republican-backed bill may face better odds in this congressional session, now that the GOP also controls the Senate and the White House.


  • A Major Tool of Nonviolence

    A Major Tool of Nonviolence

    The Higher Education Inquirer has always promoted nonviolence for progressive social change. Strikes and boycotts are two of the most powerful tools when used well. These tools must be part of a strategy that may take years, even generations. The struggle for civil rights for African Americans and other people of color has been ongoing for centuries. Women have never been granted full rights by the US Constitution (the Equal Rights Amendment has been ratified by only 38 states). And the class struggle is never ending. When we study these struggles, we must be aware of the truth that no single person can make a great difference, but groups acting in concert can. How will you be part of a movement? And what burden are you willing to carry?

    Hidden Women of the Montgomery Bus Boycott


  • Trump administration rescinds Title IX guidance on athlete pay

    Trump administration rescinds Title IX guidance on athlete pay

    The Trump administration announced Wednesday it is rolling back guidance issued in the final days of the Biden administration that said payments to college athletes through revenue-sharing agreements or from name, image and likeness deals “must be made proportionately available to male and female athletes.”

    Republicans quickly criticized the guidance and called for its rescission, arguing that mandating equal pay between men's and women's sports could cause some colleges to cut athletics programs.

    Under Title IX, colleges must provide “substantially proportionate” financial assistance to male and female athletes, though it wasn’t clear until the Biden guidance whether that requirement applied to NIL deals or revenue-sharing agreements. A settlement reached in the House v. NCAA case would require colleges to share revenue with athletes starting in the 2025–26 academic year and provide back pay.

    The Trump administration said the guidance was “overly burdensome” and “profoundly unfair.”

    “Enacted over 50 years ago, Title IX says nothing about how revenue-generating athletics programs should allocate compensation among student athletes,” acting assistant secretary for civil rights Craig Trainor said in a statement. “The claim that Title IX forces schools and colleges to distribute student-athlete revenues proportionately based on gender equity considerations is sweeping and would require clear legal authority to support it.”

    A federal judge is set to sign off on the House settlement later this spring. Several athletes have objected to the plan, including some groups of women athletes who argue the revenue won’t be shared equitably and will primarily benefit men who play football and basketball.


  • Academic freedom doesn’t require college neutrality

    Academic freedom doesn’t require college neutrality

    Amid public campaigns urging universities to commit to “institutional neutrality,” the American Association of University Professors released a lengthy statement Wednesday saying that the term “conceals more than it reveals.”

    The statement, approved by the AAUP’s elected national council last month, says it continues the national scholarly group’s long commitment to emphasizing “the complexity of the issues involved” in the neutrality debate. “Institutional neutrality is neither a necessary condition for academic freedom nor categorically incompatible with it,” it says.

    The push for universities to adopt institutional neutrality policies ramped up as administrators struggled over what, if anything, to say about Hamas’s Oct. 7, 2023, attack on Israelis and Israel’s swift retaliation in the Gaza Strip.

    The AAUP statement notes that “institutional neutrality” has varied meanings and that actions—not just words—convey a point of view. For instance, some argue that to be neutral, institutions shouldn’t adjust their financial investments for anything other than maximizing returns. But the AAUP says that “no decision concerning a university’s investment strategy counts as neutral.”

    The AAUP asserts that by taking any position on divestment—which many campus protesters have asked for—a university “makes a substantive decision little different from its decision to issue a statement that reflects its values.”

    “A university’s decision to speak, or not; to limit its departments or other units from speaking; to divest from investments that conflict with its mission; or to limit protest in order to promote other forms of speech are all choices that might either promote or inhibit academic freedom and thus must be made with an eye to those practical results, not to some empty conception of neutrality,” the AAUP statement says. “The defense of academic freedom has never been a neutral act.”

    Steven McGuire, Paul and Karen Levy Fellow in Campus Freedom at the conservative American Council of Trustees and Alumni, called the statement “another unhelpful document from the AAUP.”

    “Institutional neutrality is a long-standing principle that can both protect academic freedom and help colleges and universities to stick to their academic missions,” McGuire told Inside Higher Ed. “It’s critical that institutional neutrality be enforced not only to protect individual faculty members on campus, but also to help to depoliticize American colleges and universities at a time when they have become overpoliticized” and are viewed as biased.


  • How colleges engage faculty in student career development

    How colleges engage faculty in student career development

    It’s spring semester and a junior-level student just knocked on a professor’s office door. The student has dropped by to talk about summer internships; they’re considering a career in the faculty member’s discipline, but they feel nervous and a little unsure about navigating the internship hunt. They’ve come to the faculty member for insight, advice and a dash of encouragement that they’re on the right track.

    A fall 2023 survey by the National Association for Colleges and Employers found 92 percent of faculty members have experienced this in the past year—a student in their disciplinary area asking for career advice. But only about half of instructors say they’re very comfortable advising students on careers in their discipline, showing a gap between lived experiences and preparation for navigating these interactions.

    Career readiness is a growing undercurrent in higher education—driven in part by outside pressure from families and students to provide a return on investment for the high costs of tuition—but also pushed by an evolving job market and employers who attribute less weight to a college major or degree for early talent hiring.

    With a fraction of students engaging with the career center on campus, delivering career development and professional skills to all students can seem like an impossible task.

    Enter the career champion.

    The career champion is a trained, often full-time, faculty member who has completed professional development that equipped them to guide students through higher education to their first (or even second) role.

    The career champion identifies in the syllabus the enduring skills students will develop and provides opportunities for learners to articulate career readiness in the context of class projects, presentations or experiential learning.

    The career champion also shepherds their peers along the career integration path, creating a discipleship of industry-cognizant professors who freely give internship advice, make networking connections and argue for the role of higher education in student development.

    Over the past decade, college and university leaders have anointed and empowered champions among their faculty, and some institutions have even built layered models of train-the-trainer roles and responsibilities. The work creates a culture of academics who are engaged and responding to workforce demands, no longer shuffling students to career services for support but creating a through line of careers in the classroom.

    The Recipe for Success

    Career champion initiatives take a three-pronged approach to institutions' career-readiness goals.

    First, such efforts provide much-needed professional development for the faculty member. NACE’s survey of faculty members found 38 percent of respondents said they need professional development in careers and career preparation to improve how they counsel students.

    “Historically, faculty are not incentivized to do this work, nor are they trained to do anything really career readiness–related,” says Punita Bhansali, associate professor at Queensborough Community College and a CUNY Career Success Leadership Fellow. “This program was born out of the idea, let’s create a structured model where faculty get rewarded … they get recognized and they receive support for doing this work.”

    Growing attention has been placed on how underprepared faculty are to talk about socio-emotional health with learners. In the same way, faculty lack the tools to talk about jobs and life after college. “As they’re thinking about careers in their own work, [faculty] are used to being experts in the field, and being an expert in careers feels daunting,” explains Brenna Gomez, director of career integration at Oregon State University.

    Second, these programs get ahead of student questions about the value of liberal arts or their general education courses by identifying career skills in class early and often. This works in tandem with shifting general education requirements at some institutions, such as the University of Montana, which require faculty members to establish career as a learning outcome for courses.

    “We knew we weren’t going to move [the] career-readiness needle by being the boutique program that you sometimes go to,” says Brian Reed, associate vice provost for student success at Montana. “If we really want to have an inescapable impact, we’ve gotta get into the classroom.”

    A May 2024 Student Voice survey by Inside Higher Ed and Generation Lab found 92 percent of college students believe professors are at least partly responsible for some form of career development—such as sharing how careers in their field are evolving or helping students find internships—in the classroom. Just 8 percent of respondents selected “none of the above” in the list of career development–related tasks that faculty may be responsible for.

    “It’s getting faculty on board with [and] being very clear about the skills that a student is developing that do have applicability beyond that one class and for their career and their life,” says Richard Hardy, associate dean for undergraduate education of the college of arts and sciences at Indiana University Bloomington. IU Bloomington’s College of Arts and Sciences also requires competencies in the curriculum.

    Third, career champions are exceptionally valuable at changing the culture among their peers. “Champion” becomes a literal title when faculty interact with and influence colleagues.

    “That’s a general best practice if you’re looking to develop faculty in any way: to figure out who your champions are to start, and then let faculty talk to faculty,” says Niesha Taylor, director of career readiness at the National Association of Colleges and Employers. “They have the same interests in hand, they speak the same language and they can really help each other get on board in a more authentic way than sometimes an administrator could,” adds Taylor, a former career champion for the City University of New York system.

    Becoming an Expert

    Each institution takes a slightly different approach to how it mints its faculty champions.

    Oregon State University launched its Career Champion program in 2020 as part of a University Innovation Alliance project to better connect learners with career information in the classroom, explains Gomez.

    The six-week program is led by the Career Development Center and runs every academic term, engaging a cohort of five to 15 faculty members and instructors from various colleges and campuses at OSU. Through one session and a few hours of independent work, program participants complete collaborative course redesign projects and education around inequities in career development. 

    By the end of the quarter, faculty have built three deliverables for their course: a NACE competency career map, a syllabus statement that includes at least one competency and a new or revamped lecture activity or assignment that highlights career skills.

    After completing the program, professors can join a community of practice and receive a monthly newsletter from the career center to continuously engage in career education through research, events or resources for students.

    IU Bloomington and Virginia Commonwealth University are among institutions that have created workshop series for faculty to identify or embed competencies in their courses, as well.

    Training the Trainer

    Creating change on the academic side of a college is a historically difficult task for an administrator, because it can be like leading a horse to water. Getting faculty engaged across campus is the goal, but starting with the existing cheerleaders is the first step, campus leaders say.

    3 Tips for Launching Faculty Development

    For institutions looking to create a champion program, or something similar, NACE’s Taylor encourages administrators to:

    • Get leadership on board
    • Make the professional development process meaningful through incentives or compensation
    • Provide ways for professors to share their stories after completing the work.

    To launch career champions at the University of Montana, Reed relied on the expertise and support of instructors who had previously demonstrated enthusiasm.

    “We found our biggest champions who always come to the programs that we do, who traditionally invited us into the classroom. When we said, ‘Hey, you’ve been a fantastic partner. Would you want to be part of this inaugural cohort?’ they said, ‘Absolutely.’ And so that’s who we went with,” Reed says.

    Montana’s faculty development in careers has expanded to have three tiers of involvement: a community of practice, career champions and Faculty Career Fellows, who Reed jokes are the Green Beret unit of careers. Fellows collaborate with a curriculum coach to research and implement additional events, training and other projects for instructors.

    After completing the champion program, some faculty returned to continue their education and involvement, Reed says. “We had [faculty] that wanted to come back and do it again. They wanted to stay part of the community.”

    The City University of New York selects a handful of Career Success Leadership Fellows annually who drive integration, innovation and research around careers across the system. In addition to training other faculty members, each fellow is charged with presenting and sharing the model with other campuses, along with pursuing their own projects for advancing career development. 

    With added time and energy comes an added institutional financial investment in career fellows. Montana’s fellows receive a $1,000 stipend for their work, drawn from funds donated by the Dennis and Phyllis Washington Foundation, and CUNY’s fellows receive $2,000 for the academic year.

    The Heart Behind It All

    For some of these engaged professors, their involvement is tied to their experiences as learners. That junior knocking on their office door asking about internships? That was them once upon a time, and they wished their professor had the answers.

    “All of us have gone through undergrad. We know that we’ve taken some courses where it’s like, ‘Why did I take that?’ and the professor is just in their heads,” says Jason Hendrickson, professor of English at LaGuardia Community College, part of the CUNY system, and a Career Success Leadership Fellow.

    “[Career champions] are the people who, when you talk to them, they all say, ‘I wish I had had this in my undergrad experience … I didn’t know this stuff existed, the depth of the programs and services that we offer,’” Reed says.

    Faculty are also starting to feel the heat, particularly those in disciplines under attack in mainstream media or with historically weaker occupational outcomes for learners. 

    “I think over time, what’s happened is faculty have seen how this is actually beneficial … from the point of view of our disciplines and allowing students to see why engaging with the liberal arts is actually hugely beneficial for career and life,” IU Bloomington’s Hardy says.

    “The question that keeps me up at night is how to retain college students,” says Bhansali of CUNY’s Queensborough Community College. “The data is bleak in terms of college retention, and each faculty needs to show how the content and skills covered in their classroom are going to help students in the future, regardless of the job they choose.”

    Sometimes instructors can feel overwhelmed by the programs, trying to incorporate eight competencies into their courses, for example, or feeling as if they have to be an expert in all things career related.

    “They can feel like, ‘How can I do all of this?’ And it’s really not any one faculty [member]’s job or any one class’s job. It has to be systemic in the college,” NACE’s Taylor says.

    The best part of the job is seeing students successfully land a job in their field. Sebastian Alvarado, a biology faculty member at Queens College and a CUNY Career Success Leadership Fellow, ran into former students from his genetics class at one of his own specialist appointments.

    “It feels really rewarding—they were really there as a result of their bio major training,” Alvarado says. “When we see students getting placements in their jobs, it feels like we’re doing what we’re supposed to do.”

    Looking Ahead

    There remain some faculty members who push back against careerism in higher education—and some who remain undersupported or -resourced to take on this work, Alvarado points out—but programs have been growing slowly but surely, driven in part by champions.

    Since launching, IU Bloomington has had over 300 faculty complete the program in the College of Arts and Sciences, Hardy says.

    Montana interacted with 235 faculty members in workshops and events in the past year, which Reed expects to only increase as more faculty members rework curriculum for general education requirements.

    OSU has had 105 participants since 2020, and the College of Liberal Arts established a commitment to train at least two faculty members in each school to be Career Champions in their strategic plan for 2023–2028, Gomez says. Campus leaders are also creating professional development for academic advisers and student-employee supervisors to train other student-facing practitioners in career integration.

    Furthering this work requires additional partnerships and collaboration between faculty members and career services staff, Taylor says, relationships that institutional silos have traditionally kept from forming. 

    “I’m always—and my career success team, they’re always—scanning for these partnerships, and we use our network of existing people to sort of make referrals,” Reed says. “It’s a benevolent Ponzi scheme.”



  • Senate holds confirmation hearing for Linda McMahon

    Senate holds confirmation hearing for Linda McMahon

    President Trump’s pick to lead the Education Department, Linda McMahon, will appear today before a key Senate committee to kick off the confirmation process.

    The hearing comes at a tumultuous time for the Education Department and higher education, and questions about the agency’s future will likely dominate the proceedings, which kick off at 10 a.m. The Inside Higher Ed team will have live updates throughout the morning and afternoon, so follow along.

    McMahon has been through the wringer of a confirmation hearing before, as she was appointed to lead the Small Business Administration during Trump’s first term. But this time around the former wrestling CEO can expect tougher questions, particularly from Democrats, as the Trump administration has already taken a number of unprecedented, controversial and, at times, seemingly unconstitutional actions in just three short weeks.

    Our live coverage of the hearing will kick off at 9:15 a.m. In the meantime, you can read more about McMahon, the latest at the department and what to expect below:


  • Ignore the noise – university is overwhelmingly worth it for most

    Ignore the noise – university is overwhelmingly worth it for most

    New data from UCAS shows the number of 18 year old applications to undergraduate courses for autumn 2025 continues to climb, including from young people from disadvantaged backgrounds.

    The slight dip in the application rate can be explained in part by changes in how students engage with the application cycle. Year on year, we see decision making happening later in the admissions cycle. There is a clear disconnect between the discourse around universities and the demand for them, which continues to trend upward over the long term.

    Universities have long been used as political currency, despite being a core part of young people’s aspirations in the UK. It is not uncommon to hear influential politicians and commentators argue against the value of a degree, even though they generally have degrees themselves. If the government has its sights set on sparking economic growth and creating opportunity across society, encouraging more people to go to university is the answer, with jobs requiring higher education expected to see the most growth in the next ten years, according to analysis from Skills England.

    There has been a tremendous amount of progress in helping people from a wider range of backgrounds go to university in recent years, and this is reflected in new UCAS data. Applications from young people from areas with low participation in higher education are at their highest level in recent years. Not only does this afford thousands more young people opportunities that they might never otherwise have had, it also has huge economic benefits for them and their communities. 

    Reaping the rewards of participation

    However, there is much further to go. You are still about twice as likely to go to university if you are from the most affluent backgrounds, compared to the least affluent. This can’t be right, particularly as the data shows that the benefits of university are especially strong for people from disadvantaged backgrounds.

    Graduates who received free school meals earlier in life get a big earnings boost by going to university. On average, they’ll earn over a third more than non-graduates from the same background by the age of 31. And the benefits go beyond salary – universities play an important role in tackling economic inactivity and unemployment, one of the government’s key battles. Overall, graduates are far less likely to be claiming benefits, nearly three times less likely to be economically inactive, and over one and a half times more likely to be employed than non-graduates.

    The data shows that there is still a great deal of progress to be made in closing the regional participation gap. In London, 58 per cent of 18-year-olds applied to university; in the North East this was only 32 per cent. In Wales, the participation rate has been going backwards. This is a huge missed opportunity for the nation.

    If the government were to work with universities, colleges and schools to ensure all young people have the same educational opportunities, we’d see more people in work and more people able to adapt as the labour market changes around them, earning higher wages and filling the jobs being created in exciting new sectors of the economy.

    And, given that graduates are statistically more likely to enjoy better health, we’d probably have a healthier population too. In the UK we’re lucky to have exceptional universities in every region of the country, and producing and attracting more graduates to these areas could significantly boost regional productivity.

    That’s not to say that everyone should want to, or needs to, go to university to have a successful career or spark regional growth, but graduates’ skills make a vital contribution to local economies. Regions with high numbers of graduates perform better economically, and these benefits spill over to non-graduates. All eight growth-driving sectors identified by the government, including clean energy and the creative industries, are dependent on a bigger supply of graduates to expand. Last year, these industries reported a 50 per cent higher proportion of graduates than in the UK workforce as a whole. 

    The bottom line

    For the many young people who don’t know exactly what they want to do in life, going to university can be the difference between gaining skills and experience that will set them up for life or falling into economic inactivity. Despite what a great many headlines will tell you, universities are essential to young people’s prospects in this country, and the new application data shows that young people feel this too.

    As well as the huge economic benefits for wider society, university has huge appeal for individuals. It’s an opportunity to gain career skills, immerse yourself in a subject you enjoy and meet lifelong friends. And above all, thanks to the UK’s diverse offering of institutions and courses, including academic and vocational styles, it’s a realistic goal for most people. Perhaps, in a world where young people are being increasingly discouraged about the future ahead, university represents something more optimistic, and that’s why they continue to want to go there.
