Graduate student enrollment is increasingly critical to the overall enrollment health of universities. As demographic changes make it harder to grow traditional undergraduate enrollment, institutions will need growth in their graduate student populations to fill the gap.
The good news is that the graduate student market is growing. According to National Student Clearinghouse data, graduate enrollment reached an all-time high of 3.2 million in fall 2024, with a 3.3% increase over the year before.
However, to compete for these students, you need to understand their motivations, influences, and concerns as they select an institution. To dig into these issues, RNL surveyed 1,400 prospective and enrolled graduate students on a wide range of issues related to their decision to pursue graduate study. Here are some of the key findings that enrollment managers need to know.
What is their primary motivation to study?
It’s no surprise that today’s students are career-oriented, but it’s clear that advancing their current career is the top driver, with 74% of our participants listing that as their primary motivation to study.
What does this mean for us as practitioners in higher education? It’s critical to not only highlight career-related information, but also to make sure that information and outcomes are very easy to find. In another finding from our report, 90% of respondents indicated that it’s important for program pages to provide specific and easy-to-access information on careers related to their field.
What influences graduate students to consider graduate study?
As you can see here, these decisions are largely self-motivated even though the reasons for pursuing grad study are career-oriented. I find it interesting that they are not more employer-driven, especially when it comes to continuing degrees. Still, it shows that the majority of graduate students are self-motivated, intrinsic learners who see graduate study as a way to improve their lives.
What are the most important program features to prospective graduate students?
For our survey respondents, format flexibility was cited as the most important feature, followed closely by available specializations: respondents ranked modality and course format first, then specializations, then flexible scheduling. This could reflect the growing number of Gen Z students (those under 29), who make up 56% of the graduate student population according to the fall 2023 IPEDS snapshot. This shift in student age demographics underscores the importance of designing and offering programs in multiple delivery formats and meeting students where they are.
What are the main concerns of graduate students?
I don’t think anyone will be shocked that cost is a concern for 60% of graduate students. But half of our respondents also cited balancing responsibilities as a primary concern. This, again, is not shocking, considering the vast majority of our participants said they worked full time. And while fewer than 20% cited ROI uncertainty, that still represents nearly 1 in 5 of our survey takers. The bottom line is that institutions need to address these pain points directly when they conduct outreach with students. Mitigating some of those concerns right away can help students feel more comfortable in the process and make them more likely to enroll in, and ultimately complete, their programs.
What will inhibit a graduate student from applying to a program?
Finally, we asked our survey respondents which common requirements would potentially dissuade them from applying to a program.
As you can see, 1 in 3 students cited letters of recommendation and essays/personal statements. This is not to say that institutions should remove these requirements, but be mindful of whether your program really needs them in the evaluation process. Similarly, for items such as transcripts, look for ways to make them easier to submit or gather to remove the burden from students—and a potential barrier to applying to your program.
Read the full report for even more insights
These findings represent a fraction of what you will find in the 2025 Graduate Student Recruitment Report. It’s packed with findings on the channels graduate students use to search for schools, how they use search engines for research, which digital ads they click on, and much more.
Talk with our graduate and online enrollment experts
Ask for a free consultation with us. We’ll help you assess your market and develop the optimal strategies for your prospective graduate students and online learners.
Retention is not what you do. It is the outcome of what you do.
It’s that time of year when retention committees, student success professionals, and leadership teams across the country calculate the retention rate for the fall 2024 cohort and compare it with their previous years’ outcomes. Some campuses have undoubtedly stayed the same, others decreased, and some increased, but the overall conversation is usually about how “it” can be done better for the fall 2025 class.
Let’s talk about “it” for a minute. Many of you have heard the message that two of our founders, Lee Noel and Randi Levitz, and the student success professionals who have followed in their footsteps, have shared for several decades: Retention is not what you do. “It” is the outcome of what you do. “It” is the result of quality faculty, staff, programs, and services. As you consider improvements to your efforts that will impact the fall 2025 entering class and beyond, keep in mind the following three student retention strategies and practices.
1. Assess college student retention outcomes completely
The first strategy RNL recommends is a comprehensive outcomes assessment. All colleges and universities compute a retention rate at this time of year because it must be submitted via the IPEDS system as part of the federal requirements. But many schools go above and beyond what is required and compute other retention rates for planning purposes. For example, at what rates did you retain special populations or students enrolled in programs designed to improve student success? To best understand what contributed to the overall retention rate, other outcomes have to be assessed as well. For instance, how many students persisted but didn’t progress (that is, re-enrolled but did not successfully complete their courses)? Before you finalize the college student retention strategies for your fall 2025 students, be sure you know how your 2024 students persisted and progressed so that strategies can be developed for the year ahead.
2. Know what worked and what didn’t
The second strategy we recommend is to consider what worked well during the previous year and what didn’t. Many of us have been in situations where we continue to do the same thing and expect different results, which has been called insanity! (Fun fact: this quote is often attributed to Einstein, but apparently he never actually said it.) A common example is the academic advising model. RNL has many years of data showing that academic advising is one of the most important college student retention strategies. But doing what you have always done may no longer work with today’s college students. Advising is an area that needs constant attention for appropriate improvements. Here are a few questions for you to consider: Do your academic advising model, its standards of practice, and its outcomes assessment reveal that your students are academically progressing by taking the courses needed for completion? Can you identify an expected graduation date for each of your advisees (one of the expected outcomes of advising)? Establishing rich relationships between advisors and advisees, and providing a quality academic advising experience, can ultimately improve the institution’s graduation rate.
3. Don’t limit your scope of activity
Once you have assessed the 2024 class outcomes and the quality of your programs and services, RNL encourages you to think differently about how you will develop college student retention strategies that will impact the 2025 class. Each college has an attrition curve: a distribution of students by their likelihood of being retained. The attrition curve, like any normal distribution, will show which students are least and most likely to be retained, with the majority of students falling in the middle of the curve. See the example below:
As you consider your current activities, you may find that many of your programs are designed for the students at the tail end of the curve (section A above) or to further support the students who are already likely to persist (section B). Institutions set goals to increase retention rates but then limit the scope of students they are impacting. To have the best return on retention strategies, consider how you can target support to the largest group of students in the middle (section C) who are open to influence on whether they stay or leave, based on what you do or don’t do for them, especially during their first term and their first year at your school.
Onward for the year ahead
RNL congratulates those of you who have achieved your retention goals for the 2024 cohort. You certainly must have done some things right and must have had student retention strategies that were effective. For those of you who are looking for new directions in planning, consider the three practices outlined above.
And if you aren’t currently one of the hundreds of institutions already working with RNL, you may want to implement one or more of the RNL student success tools to support your efforts: the RNL motivational survey instruments, which identify the students who are most dropout-prone and most receptive to assistance; the RNL student retention data analytics, which identify the unique factors that contribute to persistence at your institution; and the RNL satisfaction-priorities surveys, which inform decision making and resource allocation across your campus population. RNL can provide support in all of these areas, along with ongoing consulting services to further direct and guide retention practices that can make a difference in your enrollment numbers and the success of both your students and your institution. Contact me to learn more in any of these areas.
Note: Thanks to my former colleague Tim Culver for the original development of this content.
Ask for a complimentary consultation with our student success experts
What is your best approach to increasing student retention and completion? Our experts can help you identify roadblocks to student persistence and maximize student progression. Reach out to set up a time to talk.
One of the great promises of higher education is that it acts as a social ladder—one that allows students from low-income backgrounds to climb up and reach a higher social and economic status. No one, I think, ever believed it was a guaranteed social leveler, or that children from wealthier families didn’t have an easier time succeeding after college because of their own, and their family’s, social and cultural capital. But most people, in America at least, believed that on the whole it played a positive role in increasing social mobility.
Over the past couple of decades, though, particularly as student debt has increased, people have begun to wonder if this story about social mobility through college is actually true. That’s a hard question to answer definitively. Data sets that track both student origins and outcomes are few and far between, and it’s also difficult to work out what social mobility used to look like in a quantifiable sense.
However, this summer economists Sarah Quincy of Vanderbilt University and Zach Bleemer of Princeton University released a paper called Changes in the College Mobility Pipeline Since 1900. The paper overcame some of those data limitations and took a look at the relationship between social mobility and college attendance spanning more than a century.
What they found was sobering. Not only is higher education no longer helping poor students catch up with wealthier ones, but the sector’s role as a social elevator actually stopped working more than 60 years ago. This seemed like a perfect story for the podcast, and so we invited Zach Bleemer—who you may remember from an episode on race-conscious admissions about two years ago—to join us to discuss it.
This discussion ranges from the methodological to the expositional. Where does the data come from? What does the data really mean? And are there alternative explanations for the paper’s surprising findings? But enough from me—let’s hear from Zach.
The World of Higher Education Podcast Episode 4.4 | The Widening Gap: Income, College, and Opportunity with Zachary Bleemer
Transcript
Alex Usher (AU): Zach, you wrote, with Sarah Quincy, a paper called Changes in the College Mobility Pipeline Since 1900, which looks a long way back. And you argue that the relative premium received by lower-income Americans from higher education has fallen by half since 1960. Take us through what you found—give us the 90-second elevator pitch.
Zachary Bleemer (ZB): Consider kids who were born in 1900 and were choosing whether or not to go to college in the late 1910s and early 1920s. What we were interested in was that choice, and in particular, following people for the next 20 years after they made it. Some people graduated high school but didn’t go to college, while others graduated high school and chose to go.
We wanted to compare the differences in early 1930s wages between those two groups—both for kids from lower-income backgrounds and kids from upper-income backgrounds. Now, you might be surprised to learn that there were lower-income kids going to college in the U.S. in the early 1920s, but there were. About 5 to 10% of people from the bottom parental income tercile even then were attending college.
What we found, when we linked together historical U.S. census records and followed kids forward, is that whether you were low-income or high-income, if you went to college your wages went up a lot. And the degree to which your wages went up was independent of whether you were low-income or high-income—everyone benefited similarly from going to college.
If you compare that to kids born in the 1980s, who were choosing to go to college in the late 1990s and early 2000s, you see a very different story. Everyone still gains from going to college, but kids from rich backgrounds gain a lot more—more than twice as much as kids from poor backgrounds. And that’s despite the fact they’re making the same choice. They’re going to different universities and studying different things, but when it comes down to the 18-year-old making a decision, those from poor families are just getting less from American higher education now than they did in the past—or compared to kids from rich backgrounds.
AU: I want to make sure I understand this, because it’s a crucial part of your argument. When you talk about relative premiums—premium compared to what, and relative compared to what?
ZB: What we always have in mind is the value of college for rich kids, and then asking: how much of that value do poor kids get too? In the early 20th century, and as late as the 1960s, those values were very similar. Lower-income kids were getting somewhere between 80 and 100% of the value that higher-income kids got from going to college.
AU: And by “value,” you mean…
ZB: That just means how much your wages go up. So, the wage bump for lower-income kids was very similar to that of higher-income kids. Today, though, it’s more like half—or even a little less than half—of the economic value of college-going that lower-income kids receive compared to higher-income kids.
AU: So in effect, higher education is acting as an engine of greater inequality. That’s what you’re saying?
ZB: I guess it’s worth saying that lower-income kids who go to college are still getting ahead. But it’s not as much of a pipeline as it used to be. Higher education used to accelerate lower-income kids—not to the same level of income as their higher-income peers; they were never going to catch up—but at least they got the same bump, just from a lower starting point.
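In symbols, the measure being discussed is a ratio of wage bumps (a sketch based only on the definitions given in this exchange, not the paper’s own notation):

$$\text{relative premium} = \frac{\Delta w_{\text{low}}}{\Delta w_{\text{high}}}, \qquad \Delta w_{g} = \mathbb{E}\left[w \mid \text{college},\, g\right] - \mathbb{E}\left[w \mid \text{high school only},\, g\right]$$

On the figures cited above, this ratio sat between roughly 0.8 and 1.0 from the early 20th century through the 1960s, and is around 0.5 or a little below today.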
AU: So the gap widens now. But how do you make a claim like that over 120 years? I mean, I sometimes have a hard time getting data for just one year. How do you track college premiums across a period of 120 years? How sound is the empirical basis for this? You mentioned something about linking data to census records, which obviously go back quite a way. So tell us how you constructed the data for this.
ZB: The first-order answer is that I called up and worked with an economic historian who had much more experience with historical data than I did. Like you said, it’s hard in any period to get high-quality data that links students in high school—especially with information on their parental income—to wage outcomes 10 or 15 years later.
What we did was scan around for any academic or government group over the last 120 years that had conducted a retrospective or longitudinal survey—where you either follow kids for a while, or you find a bunch of 30-year-olds and ask them questions about their childhood. We combined all of these surveys into a comprehensive database.
In the early 20th century, that meant linking kids in the 1920 census, when they were still living with their parents, to the same kids in the 1940 census, when they were in their early thirties and working in the labor market. That link has been well established by economic historians and used in a large series of papers.
By the middle of the 20th century, sociologists were conducting very large-scale longitudinal surveys. The biggest of these was called Project Talent, put together by the American Institutes for Research in 1961. They randomly sampled over 400,000 American high school students, collected a ton of information, and then re-surveyed them between 1971 and 1974 to ask what had happened in their lives.
In more recent years, there’s been a large set of governmental surveys, primarily conducted by the Departments of Labor and Education. Some of these will be familiar to education researchers—like the National Longitudinal Survey of Youth (NLSY). Others are less well known, but there are lots of them. All we did was combine them all together.
AU: I noticed in one of the appendices you’ve got about nine or ten big surveys from across this period. I guess one methodological limitation is that they don’t all follow respondents for the same amount of time, and you’d also be limited to questions where the surveys provided relatively similar answers. You never get your dream data, but those would be the big limitations—you’ve got to look for the similarities, and that restricts you.
ZB: I’d add another restriction. You’re right that, as we filtered down which datasets we could use, the key variables we needed were: parental income when the student was in high school, level of education by age 30, and how much money they made at some point between ages 30 and 35. All of our surveys had those variables.
We also looked for information about what college they attended and what their college major was. Ideally, the surveys also included some kind of high school test—like the SAT or an IQ test—so we could see what kinds of students from what academic backgrounds were going to college.
But there was another key limitation. In most of the data before 1950, it was really difficult to get a direct measure of parental income. Instead, we usually had proxies like parental occupation, industry, or level of education—variables that are highly predictive of income, but not income itself.
So, a lot of the work of the paper was lining up these measures of varying quality from different surveys to make sure the results we report aren’t just noise from mismeasurement, but instead reflect real changes on the ground in American higher education.
AU: So you ran the data and noticed there was a sharp inflection point—or maybe not sharp, but certainly things started to get worse after 1960. When you first saw that, what were your hypotheses? At that point, you’ve got to start looking at whatever variables you can to explain it. What did you think the answer was, and what did you think the confounding variables might be?
ZB: My expectation was that two things would primarily explain the change. My background is in studying undergraduate admissions, so I thought the first explanation would be rising meritocracy in admissions. That might have made it harder for lower-income and lower-testing kids to get access to high-quality education. I also thought changes in affirmative action and in access to selective schools for kids from different backgrounds, along with rising tuition that made it harder for lower-income kids to afford those schools, could have played a big role. That was one possible story.
The second possible story is that it had nothing to do with the causal effect of college at all. Instead, maybe the poor kids who go to college today aren’t as academically strong as they were in the past. Perhaps in the past only the brilliant poor kids went to college, while all the rich kids went regardless of ability. So it could have looked like poor kids were getting a big benefit from college, when in fact those few who made it would have done well anyway.
It turns out neither of these explanations is the primary driver of rising regressivity. On the test score story, it’s always been the case that rich kids who go to college have relatively higher test scores than rich kids who just graduate high school—and that poor kids who go to college have relatively lower scores compared to their peers. That hasn’t changed since 1960.
And on the access story, it’s always been the case that rich kids dominate the schools we now think of as “good”—the fancy private universities and the flagship public universities. But over the last 50 years, poor kids have actually slightly increased their representation at those schools, not the other way around. Rising meritocracy hasn’t pushed poor kids out. If anything, the variety of admissions programs universities have implemented to boost enrollment among racial minority and lower-income students has relatively increased their numbers compared to 1950 or 1960.
AU: You were just making the case that this isn’t about compositional change in where poor students went. I heard you say there are more lower-income students at Harvard, Yale, and MIT than there were 50 or 60 years ago—and I have no doubt that’s true. But as a percentage of all poor students, surely that’s not true. The vast wave of lower-income students, often from minority backgrounds, are ending up in community colleges or non-flagship publics. Surely that has to be part of the story.
ZB: Yes. It turns out there are three primary trends that explain this rising collegiate regressivity, and you just hit on two of them.
The first is exactly your point: lower-income students primarily go to satellite public universities, basically all the non–R1 publics. Higher-income students, if they attend a public university, tend to go to the flagship, research-oriented universities.
I’ll skip talking about Harvard, Yale, and Princeton—almost no one goes to those schools, and they’re irrelevant to the overall landscape.
AU: Because they’re such a small piece of the pie, right?
ZB: Exactly. Fewer than 1% of students attend an Ivy Plus school. They don’t matter when we’re talking about American higher education as a whole. The flagships, though, matter a lot. About a third of all four-year college students go to a research-oriented flagship public university.
What’s happened since 1960 isn’t that poor kids lost access to those schools—it’s that they never really had access in the first place. Meanwhile, those schools have gotten much better over time. If you look at simple measures of university quality—student-to-faculty ratios, instructional expenditures per student, graduation rates—or even our own wage “value-added” measures (the degree to which each university boosts students’ wages), the gap between flagship and non-flagship publics has widened dramatically since the 1960s.
The flagships have pulled away. They’ve gotten more money—both from higher tuition and from huge federal subsidies, in part for research—and they’ve used that money to provide much more value to the students who attend. And those students tend to be higher income.
The second trend is what you mentioned: increasing diversion to community colleges. Interestingly, before 1980, community colleges were already well established in the U.S. and enrolled only slightly more lower-income than higher-income students. They actually enrolled a lot of high-income students, and the gap was small. Since the 1980s, though, that gap has grown substantially. There’s been a huge diversion of lower-income students toward community colleges—and those schools just provide lower-value education to the students who enroll.
AU: At some level this is a sorting story, right? You see that in discussions about American economic geography—that people sort themselves into certain areas. Is that what you’re saying is happening here too?
ZB: It’s not about sorting inside the four-year sector. It’s about sorting between the two- and four-year sectors. And on top of that, we think there’s fundamentally a story about American state governments choosing to invest much more heavily in their flagship publics—turning them into gem schools, amazing schools—while leaving the other universities in their states behind. Those flagships enroll far more higher-income than lower-income students.
AU: When I was reading this paper, one thing that struck me was how hard it is to read about American higher education without also reading something about race. The last time you were on, we were talking about SCOTUS and the Students for Fair Admissions v. Harvard decision. But as far as I can tell, this paper doesn’t talk about race. I assume that goes back to our earlier discussion about data limitations—that race just wasn’t captured at some point. What’s the story there?
ZB: No—we observe race throughout this entire period. In fact, you could basically rewrite our study and ask: how has the relative value of college for white kids compared to Black kids changed over the last hundred years? I suspect you’d see very similar patterns.
The datasets we’re working with observe both parental income and race, but they aren’t large enough to separately analyze, for example, just white students and then compare lower- and higher-income groups over time. There’s a sense in which you could tell our story in terms of race, or you could tell it in terms of class—and both would be right. At a first-order level, both are happening. And within racial groups, the evidence we’ve been able to collect suggests that class gaps have substantially widened over time.
Similarly, we show some evidence that even within the lower-income group there are substantial gaps between white and Black students. So in part, I saw this as an interesting complement to the work I’d already done on race. It points out that while race is part of the story, you can also reframe the entire conversation in terms of America’s higher education system leaving lower-income students behind—irrespective of race.
AU: Right, because it strikes me that 1960 is only six years after Brown v. Board of Education. By the early to mid-1960s, you’d start to see a bigger push of Black students entering higher education, becoming a larger share of the lower-income sector. And a few years later, the same thing with Latino students.
Suddenly lower-income students are not only starting from further behind, but also increasingly made up of groups who, irrespective of education, face discrimination in the labor market. Wouldn’t that pull things down? Wouldn’t that be part of the explanation?
ZB: Keep in mind that when we measure wage premiums, we’re always comparing people who went to college with people who only finished high school. So there are Black students on both sides of that comparison, across both lower- and higher-income groups.
That said, I think your point is well taken. We don’t do any work in the paper specifically looking at changes in the racial composition of students by parental income over this period. One thing we do show is that the test scores of lower-income students who go to college aren’t falling over time. But you’re probably right: while racial discrimination affects both college-goers and non-college-goers, it’s entirely plausible that part of what we’re picking up here is the changing racial dynamics in college-going.
AU: What’s the range of policy solutions we can imagine here, other than, you know, taking money away from rich publics and giving it to community colleges? That’s the obvious one to me, but maybe there are others.
ZB: And not just community colleges—satellite publics as well. I’ve spent the last five years of my life thinking about how to get more disadvantaged students into highly selective universities, and what happens when they get there. The main takeaway from that research is that it’s really hard to get lower-income students into highly selective universities. It’s also expensive, because of the financial aid required.
But once they get into those schools, they tend not only to benefit in terms of long-run wage outcomes, they actually derive disproportionate value. Highly selective schools are more valuable for lower-income kids than for the higher-income kids who typically enroll there.
What I’ve learned from this project, though, is that the closing of higher education’s mobility pipeline isn’t fundamentally about access. It’s about investments—by state governments, by students, by donors, by all the people and organizations that fund higher education. Over time, that funding has become increasingly centralized in schools that enroll a lot of wealthy students.
So, the point you brought up—redirecting funds—is important. In California they call it “rebenching”: siphoning money away from high-funded schools and pushing it toward low-funded schools. There’s very little academic research on what happens when you do that, but our study suggests that this century-long trend of unequal investment has disadvantaged low-income students. Potentially moving in the other direction could make a real difference for them.
AU: Zach, thanks so much for being with us today.
ZB: My pleasure.
AU: It just remains for me to thank our excellent producers, Tiffany MacLennan and Sam Pufek, and you, our listeners and readers, for joining us. If you have any questions or comments about today’s podcast, or suggestions for future editions, don’t hesitate to get in touch at [email protected].
Join us next week when our guest will be Dmitry Dubrovsky, a research scholar and lecturer at Charles University in Prague. He’ll be talking to us about the slow-motion collapse of Russian higher education under Vladimir Putin. Bye for now.
*This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service. Please note, the views and opinions expressed in each episode are those of the individual contributors, and do not necessarily reflect those of the podcast host and team, or our sponsors.
The problem with findings like “1.5 per cent of students said they were in intimate relationships with staff” is the danger of extrapolation.
It’s in the results of the Office for Students’ (OfS) first sector-wide sexual misconduct survey – covering final year undergraduates in England who chose to take part in a clearly labelled bolt-on to the National Student Survey (NSS) earlier this year, with a response rate of just 12.1 per cent.
But 1.5 per cent of final-year undergraduates at English providers reporting “intimate” staff-student relationships in the past 12 months still feels like a lot – especially when half involved staff members who were engaged in the student’s education and/or assessment.
One in four respondents (24.5 per cent) said they had experienced sexual harassment since starting university, and 14.1 per cent reported experiencing sexual assault or violence.
Most incidents involved fellow students – and among incidents taking place off campus, 58.4 per cent of harassment cases and 44.1 per cent of assault cases involved someone connected to the victim’s institution.
OfS has published a dashboard of the results, an analysis report, a guide for students, and a press release where the bullets are slightly less careful about extrapolation than I’ve been above. Another report to come later will provide more detailed analysis, including results for different combinations of characteristics and findings by academic subject.
The exercise represents OfS’ first real attempt to gather national prevalence data on sexual misconduct affecting students, having initially promised to do so back in 2022 in the context of its new Condition E6. That requires providers to take “multiple steps which could make a significant and credible difference in protecting students”.
The survey covered three main areas – sexual harassment experiences, sexual assault and violence, and intimate staff-student relationships. Questions also included detailed behavioural descriptions to ensure accurate prevalence measurement.
As such, the approach built on a 2023 pilot study involving volunteer providers. Since then, OfS has shortened the questionnaire whilst maintaining its core elements, leveraging NSS infrastructure to achieve national scale coverage – although for now, none of the devolved nations have taken part.
It’s worth noting that response patterns showed quite a bit of variation between demographic groups. Students with disabilities, female students, and LGB+ students were both more likely to respond and more likely to report misconduct – creating some quite complex interpretation challenges for understanding true prevalence rates.
Prevalence patterns and vulnerable groups
Those caveats aside, the results show consistent vulnerability patterns across both harassment and assault. Female respondents reported harassment rates of 33 per cent, compared with significantly lower rates among male respondents. Respondents with disabilities experienced harassment at 34.7 per cent and assault at 22.1 per cent – higher than those without disabilities.
Sexual orientation showed significant differences. Lesbian, gay and bisexual respondents reported harassment rates of 46.6 per cent and assault rates of 29.8 per cent, nearly double the overall population rates. Those identifying as having “other sexual orientation” also showed elevated rates – at 40.1 per cent for harassment and 23.3 per cent for assault.
Age was also a key factor, with those under 21 at course start showing higher vulnerability rates – 31.2 per cent experienced harassment and 18.2 per cent experienced assault.
In terms of behaviours, the survey found “making sexually suggestive looks or staring at your body” affected 16.7 per cent of all respondents – the most common individual harassment behaviour. This was followed by “making unwelcome sexual comments or asking sexualised questions about your private life, body, or physical appearance.”
The patterns have direct relevance for E6’s training requirements, which mandate that induction sessions ensure students “understand behaviour that may constitute harassment and/or sexual misconduct.” The prevalence of apparently “lower-level” behaviours like staring suggests providers need to address misconceptions about what constitutes harassment – particularly given the survey’s use of legal definitions from the Equality Act 2010 and Protection from Harassment Act 1997.
There were also interesting patterns across socioeconomic and ethnic lines that deserve interrogation. Those from the least deprived areas (IMD quintile 5) reported higher harassment rates at 32.6 per cent, but so did those not eligible for free school meals, who showed elevated rates at 32.9 per cent. And mixed ethnicity respondents reported harassment at 31.5 per cent compared to 27.9 per cent among white students.
Where groups showed higher misconduct rates, part of the problem is that we can’t be sure whether that reflects reporting confidence, different social environments, or varying exposure patterns – all things providers will need to understand to make progress on the “credible difference” thing.
The ethnic dimension also intersects with religious identity, with Jewish respondents (29.8 per cent), those with no religion (30.5 per cent), and those from “any other religion” (35.5 per cent) showing elevated harassment rates. Again, these differential intersectional patterns should align with E6’s requirements for providers to understand their specific student populations and tailor interventions accordingly.
The reporting crisis
One of the survey’s most concerning findings relates to formal reporting rates. Only 13.2 per cent of respondents experiencing harassment in the past year made formal reports to their institutions. For sexual assault (in a university setting or involving someone connected to the university) reporting varied dramatically by age – just 12.7 per cent of under-21s reported incidents compared to 86.4 per cent of those aged 31 and above.
This reporting gap in turn creates a fundamental information deficit for universities attempting to understand campus culture and develop appropriate interventions. The data suggests institutions may be operating with incomplete intel – hampering attempts to comply with E6 requirements to understand student populations and implement effective protective measures.
E6 explicitly requires providers to offer “a range of different mechanisms” for making reports, including online and in-person options, and to “remove any unnecessary actual or perceived barriers” that might make students less likely to report. The survey’s findings suggest the mechanisms may not be reaching their intended audiences, particularly younger students.
Among those who did report, experiences were mixed. For harassment cases, 46.7 per cent rated their reporting experience as good whilst 39.3 per cent rated it as poor. Sexual assault reporting showed slightly better outcomes, with 57.3 per cent rating experiences as good and 32.4 per cent as poor. These are findings that directly relate to E6’s requirements – and suggest the sector has some way to go to build confidence in the processes it does have.
The condition mandates that providers ensure “investigatory and disciplinary processes are free from any reasonable perception of bias” and that affected parties receive “sufficient information to understand the provider’s decisions and the reasons for them.” The proportion rating experiences as poor does suggest that some providers are struggling to meet E6’s procedural fairness requirements.
University connections and scope of misconduct
Jurisdiction has always been a contested issue in some policies – here, misconduct frequently involved university-connected individuals even when incidents occurred off-campus. Among harassment cases not occurring in university settings, 58.4 per cent involved someone connected to the victim’s university. For assault cases, that figure was 44.1 per cent.
Student perpetrators dominated both categories. Staff perpetrators appeared less frequently overall, though older students were more likely than younger groups to report staff involvement in assault cases.
In E6 terms, the condition explicitly covers “the conduct of staff towards students, and/or the conduct of students towards students” and applies to misconduct “provided in any manner or form by, or on behalf of, a provider.” The data suggests universities’ efforts will need to explicitly extend beyond physical premises to encompass behaviour involving community members regardless of location.
In fact, most recent harassment incidents occurred either entirely outside university settings (39.7 per cent) or across mixed locations (45.1 per cent), with only 15.2 per cent occurring exclusively in university settings. For sexual assault, 61.9 per cent occurred outside university settings entirely.
The patterns all point to providers needing sophisticated approaches to addressing misconduct that span campus boundaries. Traditional safety measures, or at least student perceptions of jurisdiction, might well miss the majority of incidents affecting students – broader community engagement and partnership approaches will need to be deployed.
Support confidence
The survey also examined students’ confidence in seeking institutional support – finding that 67.5 per cent felt confident about where to seek help, whilst 29.3 per cent lacked confidence. But confidence levels varied significantly across demographic groups, with particular variations by sexual orientation, sex, disability status, and age.
The differential confidence patterns also justify the E6 requirement for providers to ensure “appropriate support” is available and targeted at different student needs. It specifically requires support for students “with different needs, including those with needs affected by a student’s protected characteristics.”
The age-related reporting gap suggests younger students may face particular barriers to accessing institutional processes. This could relate to unfamiliarity with university systems, power dynamics, or different attitudes toward formal complaint mechanisms. For sexual assault cases, the contrast between 12.7 per cent reporting among under-21s versus 86.4 per cent among over-31s represents one of the survey’s most striking findings.
The age-related patterns have specific relevance given E6’s training and awareness requirements. The condition requires providers to ensure students are “appropriately informed to ensure understanding” of policies and behaviour constituting misconduct. The survey suggests the requirement may need particular attention for younger students – they’re showing both higher vulnerability and lower reporting rates.
Staff-student relationships
The survey’s staff-student relationship findings concern a small proportion of the student population – but they do raise real questions about power dynamics and institutional governance.
Among the 1.5 per cent reporting those relationships, the high proportion involving educational or professional responsibilities suggests significant potential conflicts of interest.
Respondents without disabilities were more likely to report relationships involving educational responsibility (72.6 per cent versus 45.5 per cent for disabled students), and similar patterns emerged for professional responsibilities. The differences deserve investigation, particularly given disabled students’ higher overall misconduct rates.
E6’s requirements on intimate personal relationships require that providers implement measures making “a significant and credible difference in protecting students from any actual or potential conflict of interest and/or abuse of power.”
The survey’s power dynamic findings suggest the requirement is needed – although whether the most common approach that has emerged (a ban where there’s a supervisory relationship, and a register where there isn’t) creates the right “culture” remains an open question, given students’ views in general on professional boundaries.
Regulatory implications
The survey’s findings raise real questions about how OfS will use prevalence data in its regulatory approach. Back in 2022, Susan Lapworth told a House of Commons Women and Equalities Committee hearing that the data would enable the targeting of interventions:
“So a university with high prevalence and low reporting would perhaps raise concerns for us – and we would want to then understand in detail what was going on there and that would allow us to focus our effort.”
Of course, as with Access and Participation, having national data on “which kinds of students in which contexts are affected by this” could well mean that what shows up in provider data as a very small problem could add up to a lot across the country. OfS’ levers in these contexts are always limited.
The survey’s lack of coverage of postgraduate students turns up here as a major problem. We might theorise that many of them exhibit multiple vulnerabilities, given the prevalence of international students and of students working closely with supervisors – and patience with OfS’ focus on undergraduates wears thinner each time it manifests.
The report also doesn’t look at home vs international student status, and nor does it disaggregate results by provider mission group, size, type, or characteristics. It only states that all eligible English providers in NSS 2025 were included, and that data are weighted to be representative of final-year undergraduates across the sector. Providers are also (confidentially) receiving their data – although response rates down at provider level may make drawing conclusions in the way originally envisaged difficult.
The dramatic under-reporting rates create monitoring challenges for both institutions and OfS. If only 13.2 per cent of harassment victims make formal reports, institutional complaint statistics provide limited insight into actual campus culture. The information gap complicates E6 compliance assessment – and suggests OfS may need alternative monitoring approaches beyond traditional complaint metrics.
E6 does explicitly contemplate requiring providers to “conduct a prevalence survey of its whole student population to the OfS’s specification” where there are compliance concerns. The 2025 survey’s methodology and findings provide a template, but it also seems to me that more contextual research – like that found in Anna Bull’s research from a couple of years back – is desperately needed to understand what’s going on beneath many of the numbers.
Overall though, I’m often struck by the extent to which providers argue that things like E6 are an over-reach or an example of “burden”. On this evidence, even with all the caveats, it’s nothing like the burden being carried by victims of sexual misconduct.
A concerningly high number of students – particularly LGBTQ+ and disabled people, as well as women – are subjected to sexual violence and harassment while studying in higher education. Wonkhe’s Jim Dickinson reviews the findings elsewhere on the site.
The data is limited to final year undergraduates who filled out the National Student Survey, who were then given the option to fill out this further module. OfS’ report on the data details the proportion of final year students who experienced sexual harassment or violence “since being a student” as well as their experiences within the last 12 months.
It also includes data on experiences of reporting, as well as prevalence of staff-student intimate relationships – but its omission of all postgraduate students, as well as all undergraduates other than those in their final year, means that its findings should be seen as one piece of a wider puzzle.
Here, I try to lay out a few of the other pieces of the puzzle to help put the new data in context.
The timing is important
On 1 August 2025 the new condition of registration for higher education providers in England came into force. It requires all institutions in England to address harassment and sexual misconduct, including through training for all staff and students, taking steps to “prevent abuses of power” between staff and students, and publishing a “single, comprehensive source of information” about their approach to this work, including support services and the handling of reports.
When announcing this regulatory approach last year, OfS also published two studies from 2024 – a pilot prevalence survey of a small selection of English HEIs, as well as a ‘poll’ of a representative sample of 3000 students. I have discussed that data, as well as the regulation more generally, elsewhere.
In this year’s data release, 51,920 students responded to the survey, an overall response rate of 12.1 per cent. This is a significantly larger sample size than either of the 2024 studies, which comprised responses from 3000 and 5000 students respectively.
This year’s survey finds somewhat lower prevalence figures for sexual harassment and “unwanted sexual contact” than last year’s studies. In the new survey, sexual harassment was experienced by 13.3 per cent of respondents within the last 12 months (and by 24.5 per cent since becoming a student), while 5.4 per cent of respondents had been subjected to unwanted sexual contact or sexual violence within the last 12 months (since becoming a student, this figure rises to 14.1 per cent).
By any measure, these figures represent a very concerning level of gender-based violence in higher education populations. But if anything, they are at the lower end of what we would expect.
By comparison, in OfS’ 2024 representative poll of 3000 students, over a third (36 per cent) of respondents had experienced some form of unwanted sexual contact since becoming a student with a fifth (21 per cent) stating the incident(s) happened within the past year. 61 per cent had experienced sexual harassment since being a student, and 43 per cent of the total sample had experienced this in the past year.
The lower prevalence in the latest dataset could be (in part) because it draws on a population of final year undergraduate students – studies from the US have repeatedly found that first year undergraduate students are at the greatest risk, especially when they start their studies.
Final year students may simply have forgotten – or blocked out – some of their experiences from first year, leading to lower prevalence. They may also have dropped out. The timing of the new survey is also important – the NSS is completed in late spring, while we would expect more sexual harassment and violence to occur when students arrive at university in the autumn.
A study carried out in autumn or winter might find higher prevalence. Indeed, the previous two studies carried out by OfS involved data collected at different times of year – in August 2023 (for the 3000-strong poll) and ‘autumn 2023’ (for the pilot prevalence study).
A wide range of prevalence
Systematic reviews published in 2023 by Steele et al and Lagdon et al, covering the UK, Ireland, and the US, have found prevalence rates of sexual violence ranging from 7 per cent to 86 per cent.
Steele et al.’s recent study of Oxford University found that 20.5 per cent of respondents had experienced at least one act of attempted or forced sexual touching or rape, and 52.7 per cent of respondents experienced at least one act of sexual harassment within the past year.
Lagdon et al.’s study of “unwanted sexual experiences” in Northern Ireland found that a staggering 63 per cent had been targeted. And my own study of a UK HEI found that 30 per cent of respondents had been subjected to sexual violence since enrolling in their university, and 55 per cent had been subjected to sexual harassment.
For now, I don’t think it’s helpful to get hung up on comparing datasets between last year and this year that draw on somewhat different populations. It’s also not necessarily important that respondents were self-selecting within those who filled out the NSS – a US study compared prevalence rates for sexual contact without consent among students between a self-selecting sample and a non-self-selecting sample, finding no difference.
The key take-home message is that students are being subjected to a significant level of sexual harassment and violence, and that women, LGBTQ+, and disabled students in particular are unable to access higher education in safety.
Reporting experiences
The findings on reporting reveal some important challenges for the higher education sector. According to OfS’ new survey findings, rates of reporting to higher education institutions remain relatively low: 13.2 per cent of those experiencing sexual harassment, and 12.7 per cent of those subjected to sexual violence.
Of students who reported to their HEI, only around half rated their experience as “good”. And women, disabled students, and LGBTQ+ students reported much lower rates of satisfaction with reporting than the men, heterosexual, and non-disabled students who reported incidents to their university.
This survey doesn’t reveal why students were rating their reporting experiences as poor, but my study Higher Education After #MeToo sheds light on some of the reasons why reporting is not working out for many students (and staff).
At the time of data collection in 2020–21, a key reason was that – according to staff handling complaints – policies in this area were not yet fit for purpose. It’s therefore not surprising that reporting was seen as ineffective, and sometimes harmful, by many interviewees who had reported. Four years on, HEIs have hopefully made progress in devising and implementing policies in this area, so other reasons may now be more relevant.
A further issue focused on by my study is that reporting processes for sexual misconduct in HE focus on sanctions against the reported party rather than prioritising safety or other needs of those who report. Many HEIs do now have processes for putting in place safety (“precautionary” or “interim”) measures to keep students safe after reporting.
Risk assessment practices are developing. But these practices appear to be patchy and students (and staff) who report sexual harassment or violence are still not necessarily getting the support they need to ensure their safety from further harm. Not only this, but at the end of a process they are not usually told the actions that their university has taken as a result of the report.
More generally, there’s a mismatch between why people report and what is on offer from universities. Forthcoming analysis of the Power in the Academy data on staff-student sexual misconduct reveals that by the time a student gets to the point of reporting or disclosing sexual misconduct from faculty or staff to their HEI, the impacts are already being felt more severely than they are by those who do not report.
In laywoman’s terms, if people report staff sexual misconduct, it’s likely to be having a really bad impact on their lives and/or studies. Reasons for reporting are usually to protect oneself and others and to be able to continue in work/study. So it’s crucial that when HEIs receive reports, they are able to take immediate steps to support students’ safety. If HEIs are listening to students – including the voices of those who have reported or disclosed to their institution – then this is what they’ll be hearing.
Staff-student relationships
The survey also provides new data on staff-student intimate relationships. The survey details that:
By intimate relationship we mean any relationship that includes: physical intimacy, including one-off or repeated sexual activity; romantic or emotional intimacy; and/or financial dependency. This includes both in person and online, or via digital devices.
From this sample, 1.5 per cent of respondents stated that they had been in such a relationship with a staff member. Of those who had been involved in a relationship, a staggering 68.8 per cent of respondents said that the university or college staff member(s) had been involved with their education or assessment.
Even as someone who researches within this area, I’m surprised by how high both these figures are. While not all students who enter into such relationships or connections will be harmed, for some, deep harms can be caused. While a much higher proportion of students who reported “intimate relationships” with staff members were 21 or over, age of the student is no barrier to such harms.
It’s worth revisiting some of the findings from 2024 to give some context to these points. In the 3000-strong representative survey from the OfS, a third of those in relationships with staff said they felt pressure to begin, continue or take the relationship further than they wanted because they were worried that refusing would negatively impact them, their studies or career in some way.
Even consensual relationships led to problems when the relationship broke up. My research has described the ways in which students can be targeted for “grooming” and “boundary-blurring” behaviours from staff. These questions on coercion from the 2024 survey were omitted from the shorter 2025 version – but assuming such patterns of coercion are present in the current dataset, these findings are extremely concerning.
They give strong support to OfS’ approach towards staff-student relationships in the new condition of registration. OfS has required HEIs to take “one or more steps which could make a significant and credible difference in protecting students from any actual or potential conflict of interest and/or abuse of power.”
Such a step could include a ban on intimate personal relationships between relevant staff and students, but HEIs may instead choose to propose other ways to protect students from abuses of power by staff. While most HEIs appear to be implementing partial bans on such relationships, some have chosen not to.
Nevertheless, all HEIs should take steps to clarify appropriate professional boundaries between staff and students – which, as my research shows, students themselves overwhelmingly want.
Gaps in the data
The publication of this data is very welcome in contributing towards better understanding patterns of victimisation among students in HE. It’s crucial to position this dataset within the context of an emerging body of research in this area – both the OfS’ previous publications and academic studies as outlined above – in order to build up a more nuanced understanding of students’ experiences.
Some of the gaps in the data can be filled from other studies, but others cannot. For example, while the new OfS regulatory condition E6 covers harassment on the basis of all protected characteristics, these survey findings focus only on sexual harassment and violence.
National data on the prevalence of racial harassment, or of harassment on the basis of gender reassignment, would be particularly valuable in the current climate. The omission seems to be a political choice: sexual harassment and violence is a focus that both right- and left-wing voices can agree should be addressed as a matter of urgency, while it is more politically challenging (and therefore important) to talk about racial harassment.
The data also omits stalking and domestic abuse, which young people – including students – are more likely than other age groups to be subjected to, according to the Crime Survey for England and Wales. My own research found that 26 per cent of respondents in a 2020 study of gender-based violence at a university in England had been subjected to psychological or physical violence from a partner.
It does appear that despite the narrow focus on sexual harassment and violence from the OfS, many HEIs are taking a broader approach in their work, addressing domestic abuse and stalking, as well as technology-facilitated sexual abuse.
Another gap in the data analysis report from the OfS is around international students. Last year’s pilot study of this survey included some important findings on their experiences. International students were less likely to have experienced sexual misconduct in general than UK-domiciled students, but more likely to have been involved in an intimate relationship with a member of staff at their university (2 per cent of international students in contrast with 1 per cent of UK students).
They were also slightly more likely to state that a staff member had attempted to pressure them into a relationship, and their experiences of accessing support from their university were poorer. These findings are important in relation to any new policies HEIs may be introducing on staff-student relationships: since international students appear more likely to be targeted, communications around such policies need to be tailored to this group.
We also know that the same groups who are more likely to be subjected to sexual violence/harassment are also more likely to experience more of it – that is, a higher number of incidents. The new data from OfS do not report on how many incidents were experienced. Sexual harassment can be harmful as a one-off experience, but if someone is experiencing repeated harassment or unwanted sexual contact from one or more others in their university environment (and both staff and student perpetrators are likely to carry out repeated behaviours), then this can have a very heavy impact on those targeted.
The global context
Too often, policy and debate in England on gender-based violence in higher education fails to learn from the global context. Government-led initiatives in Ireland and Australia show good practice that England could learn from.
The data published by OfS is much more limited than these studies from other contexts in its focus on third-year undergraduate students only. It will be imperative to make sure that HEIs, OfS, government or other actors do not rely solely on this data – and future iterations of the survey – as a tool to direct policy, interventions or practice.
Nevertheless, in the absence of more comprehensive studies, it adds another piece to the puzzle in understanding sexual harassment and violence in English HE.
I’ll admit that I use AI. I’ve asked it to help me figure out challenging Excel formulas that otherwise would have taken me 45 minutes and a few tutorials to troubleshoot. I’ve used it to help me analyze or organize massive amounts of information. I’ve even asked it to help me devise a running training program aligning with my goals and fitting within my schedule. AI is a fantastic tool–and that’s the point. It’s a tool, not a replacement for thinking.
As AI tools become more capable, more intuitive, and more integrated into our daily lives, I’ve found myself wondering: Are we growing too dependent on AI to do our thinking for us?
This question isn’t just philosophical. It has real consequences, especially for students and young learners. A recent study published in the journal Societies reports that people who used AI tools consistently showed a decline in critical thinking performance. In fact, “whether someone used AI tools was a bigger predictor of a person’s thinking skills than any other factor, including educational attainment.” That’s a staggering finding because it suggests that using AI might not just be a shortcut. It could be a cognitive detour.
The atrophy of the mind
The term “digital dementia” has been used to describe the deterioration of cognitive abilities as a result of over-reliance on digital devices. It’s a phrase originally associated with excessive screen time and memory decline, but it’s found new relevance in the era of generative AI. When we depend on a machine to generate our thoughts, answer our questions, or write our essays, what happens to the neural pathways that govern our own critical thinking? And will the upcoming era of agentic AI expedite this decline?
Cognitive function, like physical fitness, follows the rule of “use it or lose it.” Just as muscles weaken without regular use, the brain’s ability to evaluate, synthesize, and critique information can atrophy when not exercised. This is especially concerning in the context of education, where young learners are still building those critical neural pathways.
In short: Students need to learn how to think before they delegate that thinking to a machine.
Can you still think critically with AI?
Yes, but only if you’re intentional about it.
AI doesn’t relieve you of the responsibility to think–in many cases, it demands even more critical thinking. AI produces hallucinations, fabricates claims, and can be misleading. If you blindly accept AI’s output, you’re not saving time; you’re surrendering clarity.
Using AI effectively requires discernment. You need to know what you’re asking, evaluate what you’re given, and verify the accuracy of the result. In other words, you need to think before, during, and after using AI.
The “source, please” problem
One of the simplest ways to teach critical thinking is also the most annoying–just ask my teenage daughter. When she presents a fact or claim that she saw online, I respond with some version of: “What’s your source?” It drives her crazy, but it forces her to dig deeper, check assumptions, and distinguish between fact and fiction. It’s an essential habit of mind.
But here’s the thing: AI doesn’t always give you the source. And when it does, sometimes it’s wrong, or the source isn’t reputable. Sometimes it requires a deeper dive (and a few more prompts) to find answers, especially to complicated topics. AI often provides quick, confident answers that fall apart under scrutiny.
So why do we keep relying on it? Why are AI responses allowed to settle arguments, or serve as “truth” for students when the answers may be anything but?
The lure of speed and simplicity
It’s easier. It’s faster. And let’s face it: It feels like thinking. But there’s a difference between getting an answer and understanding it. AI gives us answers. It doesn’t teach us how to ask better questions or how to judge when an answer is incomplete or misleading.
This process of cognitive offloading (where we shift mental effort to a device) can be incredibly efficient. But if we offload too much, too early, we risk weakening the mental muscles needed for sustained critical thinking.
Implications for educators
So, what does this mean for the classroom?
First, educators must be discerning about how they use AI tools. These technologies aren’t going away, and banning them outright is neither realistic nor wise. But they must be introduced with guardrails. Students need explicit instruction on how to think alongside AI, not instead of it.
Second, teachers should emphasize the importance of original thought, iterative questioning, and evidence-based reasoning. Instead of asking students to simply generate answers, ask them to critique AI-generated ones. Challenge them to fact-check, source, revise, and reflect. In doing so, we keep their cognitive skills active and growing.
And finally, for young learners, we may need to draw a harder line. Students who haven’t yet formed the foundational skills of analysis, synthesis, and evaluation shouldn’t be skipping those steps. Just like you wouldn’t hand a calculator to a child who hasn’t yet learned to add, we shouldn’t hand over generative AI tools to students who haven’t learned how to write, question, or reason.
A tool, not a crutch
AI is here to stay. It’s powerful, transformative, and, when used well, can enhance our work and learning. But we must remember that it’s a tool, not a replacement for human thought. The moment we let it think for us is the moment we start to lose the capacity to think for ourselves.
If we want the next generation to be capable, curious, and critically minded, we must protect and nurture those skills. And that means using AI thoughtfully, sparingly, and always with a healthy dose of skepticism. AI is certainly proving it has staying power, so it’s in all our best interests to learn to adapt. However, let’s adapt with intentionality, without sacrificing our critical thinking skills or succumbing to any form of digital dementia.
Laura Hakala, Magic EdTech
Laura Hakala is the Director of Online Program Design and Efficacy for Magic EdTech. With nearly two decades of leadership and strategic innovation experience, Laura is a go-to resource for content, problem-solving, and strategic planning. Laura is passionate about DE&I and is a fierce advocate, dedicated to making meaningful changes. When it comes to content management, digital solutions, and forging strategic partnerships, Laura’s expertise shines through. She’s not just shaping the future; she’s paving the way for a more inclusive and impactful tomorrow.
COLUMBUS, OHIO — Artificial intelligence-based products and software for college admissions and operations are proliferating in the higher education world.
How to choose from among them? Well, leaders can start by identifying a problem that is actually in need of an AI solution.
That is one of the core pieces of advice from a panel on deploying AI technology responsibly in college administration at the National Association for College Admission Counseling’s conference last week.
Jasmine Solomon, senior associate director of systems operations at New York University, described a “flooded marketplace” of AI products advertised for a range of higher ed functions, from tutoring systems to retention analytics to admissions chatbots.
“Define what your AI use case is, and then find the purpose-built tool for that,” Solomon said. “If you’re using a general AI model or AI tool for an unintended purpose, your result is going to be poor.”
Asking why before you buy
It’s also worth considering whether AI is the right tool.
“How does AI solve this problem better? Because maybe your team or the tools that you already have can solve this problem,” Solomon said. “Maybe you don’t need an AI tool for this.”
Experts on the panel pointed out that administrators also need to think about who will use the tool, the potential privacy pitfalls of it, and its actual quality.
As Solomon put it, “Those built-in AI features — are they real? Are they on a future-release schedule, or is it here now? And if it’s here now, is it ready for prime time or is it ‘here now, and we’re beta testing.’”
Other considerations in deploying AI include those related to ethics, compliance and employee contracts.
Institutions need to be mindful of workflows, staff roles, data storage, privacy and AI stipulations in collective bargaining contracts, said Becky Mulholland, director of first-year admission and operations at the University of Rhode Island.
“For those who are considering this, please, please, please make sure you’re familiar with those aspects,” Mulholland said. “We’ve seen this not go well in some other spaces.”
On top of all that is the environmental impact of AI. One estimate found that AI-based search engines can use as much as 30 times more energy than traditional search. The technology also uses vast amounts of water to cool data centers.
Panelists had few definitive answers for resolving AI’s environmental problems at the institutional level.
“There’s going to be a space for science to find some better solutions,” Mulholland said. “We’re not there right now.”
Solomon pointed to the pervasiveness of AI tools already embedded in much of our digital technology and argued untrained use could worsen the environmental impact.
“If they’re prompting [AI] 10, 20 times just to get the answer they want, they’ve used far more energy than if they understood prompt engineering,” Solomon said.
Transparency is also important. At NYU, Solomon said the university was careful to ensure prospective students knew they were talking with AI when interacting with its chatbot — so much so that they named the tool “NYUAdmissionsBot” to make its virtual nature as explicit as possible.
“We wanted to inform them every step of the way that you were talking to AI when you were using this chatbot,” Solomon said.
‘You need time to test it’
After all the big questions are asked and answered, and an AI solution chosen, institutions still have the not-so-small task of rolling the technology out in a way that is effective in both the short and long term.
The rollout of NYU’s chatbot in spring 2024 took “many, many months,” according to Solomon. “If a vendor tells you, ‘We will be up in a week,’ multiply that by like a factor of 10. You need time to test it.” The extra time can ensure a feature is actually ready when it’s unveiled for use.
The upside to all that time and effort for something like an admissions chatbot, Solomon noted, is that the AI feature can be available around the clock to answer inquiries, and it can quickly address the most commonly asked questions that would normally flood the inboxes of admissions staff.
But even after a successful initial rollout of an AI tool or feature, operations staff aren’t done.
Solomon described a continuous cycle of developing key metrics of success, running controlled experiments with an AI product and carefully examining data from AI use, including by having a human looking over the shoulder of the robots. In NYU’s case, this included looking at responses the chatbot gave to inquiries from prospective students.
“AI is evolving rapidly. So every six months, you really do want to test again, because it will be different,” Solomon said. “We did find that as we moved forward, we could decrease the number of hard-coded responses and rely more on the generative. And that was because the AI got better, but also because our knowledge got better.”
Solomon recommended regular error checks and performance audits and warned against overreliance on AI.
“AI is not a rotisserie. You don’t set it and forget it. It will burn it down,” she said. “It’s changing too fast.”
Dive Brief:
Southern Oregon University will eliminate 10 bachelor’s degrees, 12 minors and one graduate program in the face of long-term structural budget deficits after a vote by the institution’s board.
The public university will also lay off 18 employees and cut roughly three dozen other jobs through retirements, the elimination of vacant positions and other methods. SOU will shift 17 jobs off its payroll by funding them through alternative sources, such as the SOU Foundation, a nonprofit affiliated with the university.
The cuts are intended to stabilize SOU following years “marked by unprecedented fiscal crises,” according to the plan approved by trustees last week in a 7-2 vote.
Dive Insight:
SOU has faced a quartet of problems plaguing other higher education institutions — declining enrollment, flat state funding, rising costs and a shifting federal policy landscape.
The university’s full-time equivalent enrollment fell almost 22% from 4,108 students in 2015 to 3,209 in 2024, according to state data.
“It is also highly likely that the federal government’s intent to dismantle support systems for low-income students also will have a devastating impact,” the plan noted.
Earlier this year, the Trump administration sought to reduce funding to certain need-based student aid programs and eliminate others altogether, such as the Federal Supplemental Educational Opportunity Grant program. Since then, both chambers of Congress have rejected some of those overtures in their own budget proposals for fiscal year 2026, though House lawmakers likewise pitched eliminating FSEOG.
At the state level, Oregon’s fiscal 2025-27 budget raised funding for its public universities slightly. But SOU argued that the bump fails to cover increasing costs outside of its control, such as retirement and medical benefits.
In June, SOU’s board of trustees directed the university to find $5 million in savings by the end of fiscal 2026.
In response, University President Rick Bailey planned more significant cuts to set SOU up for longer-term stability. He declared financial exigency at the beginning of August, paving the way for a dramatic restructuring at the institution.
The plan pitched to SOU’s board Friday will cut more than $10 million from the university’s annual educational and general budget over the next four years, bringing it down to approximately $60 million total.
Academically, the proposal will sunset “low-enrolled or less regionally relevant programs” to focus on “what SOU does best for the majority of students,” it said.
Following the reduction, the university will offer a total of 30 majors and 19 minors meant to lead students toward interdisciplinary programs “aligned with regional workforce demands.”
“SOU is no longer a comprehensive university,” the plan said. “We cannot continue to provide all the programs and supports as we have in the past.”
Bachelor’s degrees slated for elimination include international studies, chemistry, Spanish and multiple mathematics programs. The university will also cut a graduate leadership degree focused on outdoor expeditions.
Some programs originally considered for elimination — such as creative writing and economics — will continue with restructured curricula and face additional review in the coming years.
The plan will also restructure SOU’s honors college and eliminate direct funding for its annual creativity conference.
During Friday’s meeting, board member Debra Fee Jing Lee supported the cuts, arguing SOU’s strength moving forward will be based on its ability “to be lean and agile and entrepreneurial.”
Board member Elizabeth Shelby similarly voted for the proposal.
“It’s incumbent upon us to plan as we must for the next several years, even if that requires additional cuts,” she said.
But Hala Schepmann, a board member and chair of SOU’s chemistry and physics department, opposed the plan, calling it “the nuclear option.”
“Do we need to make immediate cuts? Yes,” she said. “But taking away key foundational components of our institution will make it harder for us to make progress.”
Schepmann also took issue with deciding on the plan amid “significant fluctuations” in the university’s projected budget.
This summer, SOU lowered projections for its expected revenue by $1.9 million after an internal analysis found “a multi-decade issue” of double-counting some online education tuition revenue.
The workforce reduction comes just two years after SOU eliminated nearly 82 full-time positions through a combination of layoffs, unfilled vacancies, voluntary reductions and retirements.
That wave of cuts left the remaining employees “feeling as though they were asked to do more with less,” according to the proposal. It argued that the new round of cuts will address this issue by paring down programs in tandem with shrinking the workforce.
To help colleges, universities, and other educational institutions around the world, I introduce the “Compass Framework for AI Literacy Integration into Higher Education.” This is a completely free (Creative Commons 4.0) AI literacy framework for easy and flexible integration of AI literacy into the curriculum. The framework draws on my experience working with many universities around the world, on reviews of other AI frameworks, and on related research.
The AI literacy components are: Awareness, Capability (including prompt engineering), Knowledge, and Critical Thinking (including bias, ethics, environmental impacts, and avoiding overreliance).
The framework also addresses student learning outcomes and provides specific examples of how it can be integrated without necessarily increasing credit requirements. It additionally covers needed subskills, advanced AI skills for degree-specific fields, alternative frameworks, and further actions to ensure overall success with AI literacy integration.
An introductory video on this important and free AI literacy framework is available through the Sovorel Center for Teaching & Learning educational YouTube channel.
The Compass Framework for AI Literacy Integration into Higher Education has been designed and made available for free by the Sovorel Center for Teaching & Learning. Please let us know if you have used it, if it has been helpful for your organization, or if you have any other feedback. Thank you very much, and we appreciate everyone’s ongoing support.
In a recent statement and a series of fireside chats, Education Secretary Linda McMahon repeatedly drew attention to her efforts to move all career, technical and adult education programs from the Department of Education to the Department of Labor and consolidate some as part of the Trump administration’s quest to eliminate waste, fraud and abuse.
“I can’t think of a more inefficient system than to have duplication and just one side not knowing what the other is doing,” she said at one conservative policy summit last week. “So let’s consolidate them all in the Department of Labor, where I think they should be. And if we show that this is an incredibly efficient and effective way to manage these programs, it is my hope that Congress will look at that and approve these moves.”
According to ED, many staff members from the Office of Career, Technical and Adult Education are already working under the supervision of the DOL, though the funding for the programs they oversee is still managed by McMahon. Moving that money, which was appropriated by Congress to the Education Department, would require legislative approval. But symbolically, the integration process is under way.
The Trump administration is not the first government body to propose or execute such a merger, however. A handful of states have combined their departments of higher education and workforce development agencies in the hopes of better aligning state budgets, curriculum and grant allocation with the needs of local employers. Missouri, for example, has been working since 2018 to integrate what was the Department of Higher Education and the Division of Workforce Development into a new Department of Higher Education and Workforce Development.
Inside Higher Ed spoke with the newly fused department’s commissioner, Bennett Boggs, and deputy commissioner, Leroy Wade, to understand how it came to be, what challenges they faced in the process and the benefits they’ve seen as a result.
The conversation has been edited for length and clarity.
Q: Take me back and tell me a little bit about what sparked this merger for Missouri.
Wade: There was a realization that we weren’t being as effective as we could be as a state in terms of our economic development. And so Governor Parson, at that point, put together a group called Talent for Tomorrow that looked at what direction do we need to go and what kinds of things do we need to focus on? And then there was an ancillary piece called Best in the Midwest that looked around at our surrounding states to decide, from an economic development, from a workforce perspective, how are we doing?
Unfortunately, what we found was that we weren’t doing really very well. We were toward the bottom in almost everything that we looked at. And so out of that process and a listening tour all around the state to hear what folks wanted and what their perspective was, came two things. One was to try and streamline our Department of Economic Development. The other piece was to look at, how do we change our pipeline system? And that’s what brought the Division of Workforce Development to merge with the Department of Higher Education.
Q: How did the process of merging these two institutions work? Was it all led by the governor and the state executive branch, or did it require any legislative backing?
Wade: The governor has the authority to reorganize state government, if you will, at least to a certain extent. So the process started with an executive order laying out what that reorganization would look like. Now, the Legislature has a role in that process. They can’t make changes to it, but they can either vote to accept it or not. But it went through; there was no legislative opposition.
There was some existing statutory language that talked about the Department of Higher Ed, and so there had to be some language changes to adopt the new name of the organization and to reflect some of the structural changes that took place. But it was really all driven by that executive order.
Q: One of the core justifications we’ve heard from the Trump administration for merging the CTE operations at a federal level is to eliminate what they say are duplicate programs. When Missouri combined its agencies, was that one of your motivations as well, and did you find any duplicates to consolidate?
Boggs: What we have found here in Missouri is not so much duplication as an opportunity for coordination. A large part of it was about combining functions that have similar end goals but are not exactly the same program. It’s about asking, how can they be coordinated to be more effective together?
One of the answers to that is leveraging broader expertise. If you bring people and programs together and help broaden the perspective of the work that they’re doing, it allows the organization to move from silos to strategic partnerships.
For example, Missouri is very strong in registered apprenticeships. But it’s not just in the trades. We’ve also developed some really effective programs in education and health care and some other professional industries. Part of that is because we’ve been able now to coordinate with local workforce boards, local regional employers and then the two- and four-year institutions, particularly regional ones. Before, these three groups may have been unintentionally competing because they weren’t that aware of each other, but now by working together they can be a funding stream. They can bring resources together to help strengthen and accelerate workforce development that would not have happened if they kept operating separately.
It also just strengthens our communications and helps us as a department talk about higher education in terms of, how does this make life better for Missourians? And that’s a better, healthier conversation to be having.
Q: Despite the shared end goal, there had to be times where there was internal conflict in trying to streamline things. For example, if both an apprenticeship program and a health-care school are training hospital technicians, I can imagine they’re each trying to fill their own seats. So what were some of the challenges you faced in the merger process and how did you overcome them?
Boggs: We know in Missouri, 65 percent of all jobs currently require education or training past high school, and that number is only expected to grow. Of that, 35 percent would be an associate degree or some certification, and 30 percent would be a bachelor’s degree and above. So this is a statewide effort to create pathways for all Missourians—so this is not either-or, it’s yes-and.
Why can’t a student have an internship or a work-based learning experience while in a postsecondary institution? And so part of those regional partnerships is that they help us think about things like that. They’re not only preparing students for the current job but asking, how do we get them on a pathway to be ready for the next one?
One of the challenges in this is understanding that different sections of our department work in different time frames. For example, we run 21 job centers in the state, and when folks come in the door there, they need a job to pay the rent next week. We also have different parts of the department that are approving academic programs in a way that might not really take off for three to five years. It’s not only a difference in pace but also culture and lingo. We just have to be aware of each other and learn.
The other challenge is potential for misalignment related to policies, data and physical infrastructure. This really hits home in terms of our planning and budget folks. We now have an array of state and federal programs that support Missourians in paying for education, whether that’s state-funded financial aid or federal [Workforce Innovation and Opportunity Act] funds. These separate funding streams have different requirements, different reporting structures and eligibility criteria, so our staff then has to be able to think quickly, know about and pivot between multiple particular funding streams.
Q: We’ve also heard critics of the merger at the federal level suggest that there may be barriers in statute that make it difficult to merge or consolidate various programs and grants. Did you experience any difficulties legally when merging your two agencies?
Boggs: We didn’t encounter any legal hurdles, but as I was mentioning earlier, understanding the differences in the federal and state funding streams and the requirements and the structures and the eligibility requirements, those kinds of things had to get worked through.
Q: So would you say it was less about trying to change the existing rules and regulations for various programs and more trying to understand those stipulations in order to use the funding in a more strategic, collaborative way?
Boggs: I think that’s pretty accurate. It’s about keeping the similar end goal in mind, and then asking, what funds can be used to help advance to that shared goal?
Q: All challenges aside, over all, has this merger positively affected Missouri’s higher ed and workforce development landscape? And, if so, how?
Boggs: Absolutely. It’s changed the tone and the conversation statewide in terms of postsecondary education being part of economic and community development. It has pulled in strategic partners, from job centers, regional workforce boards, chambers of commerce and regional universities to have really interesting gatherings and talk about where they need to grow. And it makes for a better conversation about the cutting-edge research our flagship institution does. Over all, it helps us as a state have a better, more comprehensive conversation about learning and workforce development.
Q: Has the Trump administration reached out to you in an effort to learn from Missouri’s experiences in merging these two departments?
Boggs: No, but if they wanted to contact us, we’d be happy to assist however we could.
Q: Do you think there’s an opportunity for the federal government to learn from both the challenges and the successes that you have experienced at the state level?
Boggs: You know there’s a famous quote from Louis Brandeis that says, “States are the laboratories of democracy.”
I wouldn’t pretend to know what the federal government can take away from Missouri. They are operating at a much more complicated level, with many more components in play. But certainly in Missouri, we’ve had a good experience doing this, and we’re still discovering new areas for improvement all the time.
In fact, we’ve got a technical cleanup bill we’re proposing this upcoming legislative session of just small bits and pieces in the state statutes from before the merger that still need to be addressed. Part of what helped us out, though, is data—the integrating of some of the disparate data systems into now a more comprehensive data group, and that’s helping us statewide with better policymaking.