This blog was kindly authored by Liam Earney, Managing Director, HE and Research, Jisc.
The REF-AI report, funded by Research England and co-authored by Jisc and the Centre for Higher Education Transformations (CHET), was designed to provide evidence to help the sector prepare for the next REF. Its findings show that Generative AI is already shaping the approaches that universities adopt. Some approaches are cautious and exploratory, some are inventive and innovative, and most of this activity is happening quietly in the background. GenAI in research practice is no longer theoretical; it is part of the day-to-day reality of research, and of research assessment.
For Jisc, some of the findings in the report are unsurprising. We see every day how digital capability is uneven across the sector, and how new tools arrive before governance has had a chance to catch up. The report highlights an important gap between emerging practice and policy – a gap that the sector can now work collaboratively to close. UKRI has already issued guidance on generative AI use in funding applications and assessment, emphasising honesty, rigour, transparency and confidentiality. Yet the REF context still lacks equivalent clarity, leaving institutions to interpret best practice alone. This work was funded by Research England to inform future guidance and support, ensuring that the sector has the evidence it needs to navigate GenAI responsibly.
The REF-AI report rightly places integrity at the heart of its recommendations. Recommendation 1 is critical to support transparency and avoid misunderstandings: every university should publish a clear policy on using Generative AI in research, and specifically in REF work. That policy should outline what is acceptable and require staff to disclose when AI has helped shape a submission.
This is about trust and about laying the groundwork for a fair assessment system. At present, too much GenAI use is happening under the radar, without shared language or common expectations. Clarity and consistency will help maintain trust in an exercise that underpins the distribution of public research funding.
Unpicking a patchwork of inconsistencies
We now have insight into real practice across UK universities. Some are already using GenAI to trawl for impact evidence, to help shape narratives, and even to review or score outputs. Others are experimenting with bespoke tools or home-grown systems designed to streamline their internal processes.
This kind of activity is usually driven by good intentions. Teams are trying to cope with rising workloads and the increased complexity that comes with each REF cycle. But when different institutions use different tools in different ways, the result is not greater clarity. It is a patchwork of inconsistent practices and a risk that those involved do not clearly understand the role GenAI has played.
The report notes that most universities still lack formal guidance and that internal policy discussions are only just beginning. In fact, practice has moved so far ahead of governance that many colleagues are unaware of how much GenAI is already embedded in their own institution’s REF preparation – or, in the case of professional services staff, how much GenAI their researchers are already using.
The sector digital divide
This is where the sector can work together, with support from Jisc and others, to help narrow the divide that exists. The survey results tell us that many academics are deeply sceptical of GenAI in almost every part of the REF. Strong disagreement is common and, in some areas, reaches seventy per cent or more. Only a small minority sees value in GenAI for developing impact case studies.
In contrast, interviews with senior leaders reveal a growing sense that institutions cannot afford to ignore this technology. Several Pro Vice Chancellors told us that GenAI is here to stay and that the sector has a responsibility to work out how to use it safely and responsibly.
This tension is familiar to Jisc. GenAI literacy is uneven, as is confidence, and even general digital capability. Our role is to help universities navigate that unevenness. In learning and teaching, this need is well understood, with our AI literacy programme for teaching staff well established. The REF AI findings make clear that similar support will be needed for research staff.
Why national action matters
If we leave GenAI use entirely to local experimentation, we will widen the digital divide between those who can invest in bespoke tools and those who cannot. The extent to which institutions can benefit from GenAI is tightly bound to their resources and existing expertise. A national research assessment exercise cannot afford to leave that unaddressed.
We also need to address research integrity, and that should be the foundation for anything we do next. If the sector wants a safe and fair path forward, then transparency must come first. That is why Recommendation 1 matters. The report suggests universities should consider steps such as:
define where GenAI can and cannot be used
require disclosure of GenAI involvement in REF related work
embed these decisions into their broader research integrity and ethics frameworks
The report also notes that current thinking about GenAI rarely connects with responsible research assessment initiatives such as DORA or CoARA; that gap has to close.
Creating the conditions for innovation
These steps do not limit innovation; they make innovation possible in a responsible way. At Jisc we already hear from institutions looking for advice on secure, trustworthy GenAI environments. They want support that will enable experimentation without compromising data protection, confidentiality or research ethics. They want clarity on how to balance efficiency gains with academic oversight. And they want to avoid replicating the mistakes of early digital adoption, where local solutions grew faster than shared standards.
The REF AI report gives the sector the evidence it needs to move from informal practice to a clear, managed approach.
The next REF will arrive at a time of major financial strain and major technological change. GenAI can help reduce burden and improve consistency, but only if it is used transparently and with a shared commitment to integrity. With the right safeguards, GenAI could support fairness in the assessment of UK research.
From Jisc’s perspective, this is the moment to work together. Universities need policies. Panels need guidance. And the sector will need shared infrastructure that levels the field rather than widening existing gaps.
This blog was kindly authored by Richard Brown, Associate Fellow at the University of London’s School of Advanced Study.
Universities are on the front line of a new technological revolution. Generative AI (genAI) use (mainly large language model-based chatbots like ChatGPT and Claude) is almost universal among students. Plagiarism and accuracy are continuing challenges, and universities are considering how learning and assessment can respond positively to the daunting but uneven capabilities of these new technologies.
How genAI is transforming professional services
The world of work that students face after graduation is also being transformed. While it is unclear how much of the current slowdown in graduate recruitment can be attributed to current AI use, or to uncertainty about its long-term impacts, it is likely that graduate careers will see great change as the technology develops. Surveys by McKinsey indicate that adoption of AI spread fastest during 2023/24 in media, communications, business, legal and professional services – the sectors with the highest proportions of graduates in their workforce (around 80 per cent in London and 60 per cent in the rest of the UK).
‘Human-centric’, a new report from the University of London, looks at how AI is being adopted by professional service firms, and at what this might mean for the future shape and delivery of higher education.
The report identifies how AI is being adopted both through grassroots initiatives and corporate action. In some firms, genAI is still the preserve of ‘secret cyborgs’ – individual workers using chatbots under the radar. In others, task forces of younger workers have been deployed to find new uses for the tech to tackle chronic workflow problems or develop new services. Lawyers and accountants are codifying expertise into proprietary knowledge bases. These are private chatbots that minimise the risks of falsehood that still plague open systems, and offer potential to extend cheap professional-grade advice to many more people.
Graduate careers re-thought
What does this mean for graduate employment and skills? Many of the routine tasks frequently allocated to graduates can be automated through AI. This could be a double-edged sword. On the one hand, genAI may open up more varied and engaging ways for graduates to develop their skills, including the applied client-facing and problem-solving capabilities that underpin professional practice.
On the other hand, employers may question whether they need to employ as many graduates. Some of our interviewees talked of the potential for the ‘triangle’ structure of mass graduate recruitment being replaced by a ‘diamond-shaped’ refocus on mid-career hires. The obvious problem with this approach – of where mid-career hires will come from if there is no graduate recruitment – means that graduate recruitment is unlikely to dry up in the short term, but graduate careers may look very different as the knowledge economy is transformed.
The agile university in an age of career turbulence
This will have an impact on universities as well as employers. AI literacy, and the ability to use AI responsibly and authentically, are likely to become baseline expectations – suggesting that this should be core to university teaching and learning. Intriguingly, this is less about traditional computing skills and more about setting AI in context: research shows that software engineers were less in demand in early 2025 than AI ethicists and compliance specialists.
Broader ‘soft’ skills (what a previous University of London / Demos report called GRASP skills – general, relational, analytic, social and personal) will remain in demand, particularly as critical judgement, empathy and the ability to work as a team remain human-centric specialities. Employers also said that, while deep domain knowledge was still needed to assess and interrogate AI outputs, they were also looking for employees with a broader understanding of issues such as cybersecurity, climate regulation and ESG (Environmental, Social, and Governance), who could work across diverse disciplines and perspectives to create new knowledge and applications.
The shape of higher education may also need to change. Given the speed of advances in AI, it is likely that most propositions about which skills will be needed in the future may quickly become outdated (including this one). This will call for a more responsive and agile system, which can experiment with new course content and innovative teaching methods while sustaining the rigour that underpins the value of its degrees and other qualifications.
As the Lifelong Learning Entitlement is implemented, the relationship between students and universities may also need to become more long-term, rather than an intense three-year affair. Exposure to the world of work will be important too, but this needs to be open to all, not just to those with contacts and social capital.
Longer term – beyond workplace skills?
In the longer term, all bets are off, or at least pretty risky. Public concerns (over everything from privacy, to corporate control, to disinformation, to environmental impact) and regulatory pressures may slow the adoption of AI. Or AI may so radically transform our world that workplace skills are no longer such a central concern. Previous predictions of technology unlocking a more leisured world have not been realised, but maybe this time it will be different. If so, universities will not just be preparing students for the workplace, but also helping students to prepare for, shape and flourish in a radically transformed world.
A growing number of colleges and universities are embedding artificial intelligence tools and AI literacy into the curriculum with the intent of aiding student success. A 2025 Inside Higher Ed survey of college provosts found that nearly 30 percent of respondents have reviewed curriculum to ensure that it will prepare students for AI in the workplace, and an additional 63 percent say they have plans to review curriculum for this purpose.
In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with Shlomo Argamon, associate provost for artificial intelligence at Touro University, to discuss the university’s policy for AI in the classroom, the need for faculty and staff development around AI, and the risks of gamification in education.
An edited version of the podcast appears below.
Q: How are you all at Touro thinking about AI? Where is AI integrated into your campus?
A: When we talk about the campus of Touro, we actually have 18 or 19 different campuses around the country and a couple even internationally. So we’re a very large and very diverse organization, which does affect how we think about AI and how we think about issues of the governance and development of our programs.
That said, we think about AI primarily as a new kind of interactive technology, one best seen as assistive to human endeavors. We want to teach our students how to use AI effectively in what they do and how to understand and mitigate the risks of using it improperly, but above all, to always think about AI in a human context.
When we think about integrating AI for projects, initiatives, organizations, what have you, we need to first think about the human processes that are going to be supported by AI and then how AI can best support those processes while mitigating the inevitable risks. That’s really our guiding philosophy, and that’s true in all the ways we’re teaching students about AI, whether we’re teaching students deeply technical [subjects], preparing them for AI-centric careers, or preparing them to use AI in whatever other careers they may pursue.
Q: When it comes to teaching about AI, what is the commitment you all make to students? Is it something you see as a competency that all students need to gain or something that is decided by the faculty?
A: We are implementing a combination—a top-down and a bottom-up approach.
One thing that is very clear is that every discipline, and in fact, every course and faculty member, will have different needs and different constraints, as well as competencies around AI that are relevant to that particular field, to that particular topic. We also believe there’s nobody that knows the right way to teach about AI, or to implement AI, or to develop AI competencies in your students.
We need to encourage and incentivize all our faculty to be as creative as possible in thinking about the right ways to teach their students about AI, how to use it, how not to use it, etc.
So No. 1 is, we’re encouraging all of our faculty at all levels to be thinking and developing their own ideas about how to do this. That said, we also believe very firmly that all students, all of our graduates, need to have certain fundamental competencies in the area of AI. And the way that we’re doing this is by integrating AI throughout our general education curriculum for undergraduates.
Ultimately, we believe that most, if not all, of our general education courses will include some sort of module about AI, teaching students the AI-related competencies relevant to the particular topics they’re learning, whether it’s writing, reading skills, presentations, math, science or history: the different kinds of cognition and skills that you learn in different fields. The aim is to identify the AI competencies relevant to each area and have students learn them in that context.
So No. 1, they’re learning it not all at once. And also, very importantly, it’s not isolated from the topics, from the disciplines that they’re learning, but it’s integrated within them so that they see it as … part of writing is knowing how to use AI in writing and also knowing how not to. Part of learning history is knowing how to use AI for historical research and reasoning and knowing how not to use it, etc. So we’re integrating that within our general education curriculum.
Beyond that, we also have specific courses in various AI skills, both at the undergraduate [and] at the graduate level, many of which are designed for nontechnical students to help them learn the skills that they need.
Q: Because Touro is such a large university and it’s got graduate programs, online programs, undergraduate programs, I was really surprised that there is an institutional AI policy.
A lot of colleges and universities have really grappled with, how do we institutionalize our approach to AI? And some leaders have kind of opted out of the conversation and said, “We’re going to leave it to the faculty.” I wonder if we could talk about the AI policy development and what role you played in that process, and how that’s the overarching, guiding vision when it comes to thinking about students using and engaging with AI?
A: That’s a question that we have struggled with, as all academic leaders, as you mentioned, struggle with this very question.
Our approach is to create policy at the institutional level that provides only the necessary guardrails and guidance, which then enables each of our schools, departments and individual faculty members to implement the correct solutions for their particular areas. Within this guidance and these guardrails, it’s done safely, and we know that it’s going, overall, in a positive and institutionally consistent direction.
In addition, one of the main functions of my office is to provide support to the schools, departments and especially the faculty members to make this transition and to develop what they need.
It’s an enormous burden on faculty members: not just adding AI content to their classes, if they do so, but shifting the way that we teach, the way that we do assessments, even the way that we relate to our students. All of that has to change, and it creates a real burden on them.
It’s a process to develop resources and ways of doing this. I and the people who work in our office have regular office hours to talk to faculty and work with them. One of the most important things that we do, and we spend a lot of time and effort on this, is training for our faculty and staff: on using AI, on teaching about AI, on the risks of AI and how to mitigate them, on how to think about AI. It all comes down to our faculty and staff: they are the university, they’re the ones who are going to make all of this a success, and it’s up to us to give them the tools they need.
I would say that while for many questions there are no right or wrong answers, just different perspectives and different opinions, I think there is one right answer to “What does a university need to do institutionally to ensure success at dealing with the challenge of AI?” It’s to support and train the faculty and staff, who are the ones who are going to make whatever the university does a success or a failure.
Q: Speaking of faculty, there was a university faculty innovation grant program that sponsored faculty to take on projects using AI in the classroom. Can you talk a little bit about that and how that’s been working on campus?
A: We have an external donor who donated funds so that we were able to award nearly 100 faculty innovation challenge grants for developing methods of integrating AI into teaching.
Faculty members applied and did development work over the summer, and they’re implementing it in their fall courses right now. We’re currently going through the initial set of faculty reports on their projects, and we have projects from all over the university, in all different disciplines, taking many different approaches to using AI.
At the beginning of next spring, we’re going to have a conference workshop to bring everybody together so we can share all of the different ways that people try to do this. Some experiments, I’m sure, will not have worked, but that’s also incredibly important information, because what we’re seeking to do [is], we’re seeking to help our students, but we’re also seeking to learn what works, what doesn’t work and how to move forward.
Again, this goes back to our philosophy that we want to unleash the expertise, intelligence, creativity of our faculty—not top down to say, “We have an AI initiative. This is what you need to be doing”—but, instead, “Here’s something new. We’ll give you the tools, we’ll give you the support. We’ll give you the funding to make something happen, make interesting things happen, make good things for your students happen, and then let’s talk about it and see how it worked, and keep learning and keep growing.”
Q: I was looking at the list of faculty innovation grants, and I saw that there were a few other simulations. There was one for educators helping with classroom simulations. There was one with patient interactions for medical training. It seems like there’s a lot of different AI simulations happening in different courses. I wonder if we can talk about the use of AI for experiential learning and why that’s such a benefit to students.
A: Ever since there’s been education, there’s been this kind of distinction between book learning and real-world learning, experiential learning and so forth. There have always been those who have questioned the value of a college education because you’re just learning what’s in the books and you don’t really know how things really work, and that criticism has some validity.
But what we’re trying to do and what AI allows us to do [is], it allows us and our students to have more and more varied experiences of the kinds of things they’re trying to learn and to practice what they’re doing, and then to get feedback on a much broader level than we could do before. Certainly, whenever you had a course in, say, public speaking, students would get up, do some public speaking, get feedback and proceed. Now with AI, students can practice in their dorm rooms over and over and over again and get direct feedback; that feedback and those experiences can then be made available to the faculty member, who can give the students more direct, concentrated, expert human feedback on their performance, and it just scales.
In the medical field, this is where it’s hugely, hugely important. There’s a long-standing institution in medical education called the standardized patient. Traditionally it’s a human actor who learns to act as a patient; they’re given a profile of what disorders they’re supposed to have and how they’re supposed to act, and then students can practice, whether it’s diagnostic skills or questions of patient care and bedside manner, and then get expert feedback.
We now have, to a large extent, AI systems that can do this, whether it’s an interactive text-based or voice-based simulation. We also have robotic mannequins that the students can work with, AI-powered, with AI doing the conversation. Then they can be doing physical exams on the mannequins, which simulate different kinds of conditions, and again, this gives the possibility of really scaling up this kind of experiential learning. Another kind of AI that has been found useful in a number of our programs, particularly in our business program, is a system that watches people give presentations and gives real-time feedback, and that works quite well.
Q: These are interesting initiatives, because they cut out the middleman of needing a third party or maybe a peer to help the student practice the experience. But in some ways, does it gamify it too much? Is it too much like video games for students? How have you found that these are realistic enough to prepare students?
A: That is indeed a risk, and one that we need to watch. As in nearly everything that we’re doing, there are risks that cannot be eliminated, only managed. We need to be constantly alert and watching for these risks and ensuring that we don’t overstep one boundary or another.
When you talk about the gamification, or the video game nature of this, the artificial nature of it, there are really two pieces to it. One piece is the fact that there is no mannequin that exists, at least today, that can really simulate what it’s like to examine a human being and how the human being might react.
AI chatbots, as good as they are, will not now, or in the near, foreseeable future at least, be able to simulate human interactions entirely accurately. So there’s always going to be a gap. But it’s the same as with other kinds of education: you read a book, the book is not going to be perfect, and your understanding of the book is not going to be perfect. There has to be an iterative process of learning. We have to have more realistic simulations, different kinds of simulations, so the students can, in a sense, mentally triangulate their different experiences to learn to do things better. That’s one piece of it.
The other piece, when you say gamification, there’s the risk that it turns into “I’m trying to do something to stimulate getting the reward or the response here or there.” And there’s a small but, I think, growing research literature on gamification of education, where if you gamify a little bit too much, it becomes more like a slot machine, and you’re learning to maneuver the machine to give you the dopamine hits or whatever, rather than really learning the content of what you’re doing. The only solution to that is for us to always be aware of what we’re doing and how it’s affecting our students and to adjust what we’re doing to avoid this risk.
This goes back to one of the key points: Our whole philosophy of this is to always look at the technology and the tools, whether AI or anything else, as embedded within a larger human context. The key here is understanding that when we implement some educational experience for students, whether it involves AI or technology or not, it’s always creating incentives for the students to behave in a certain way. What are those incentives, and are those incentives aligned with the educational objectives that we have for the students? That’s the question that we always need to be asking ourselves and also observing, because with AI, we don’t entirely know what those incentives are until we see what happens. So we’re constantly learning and trying to figure this out as we go.
If I could just comment on that peer-to-peer simulation: Medical students poking each other or social work students interviewing each other for a social work kind of exam has another important learning component, because the student that is being operated upon is learning what it’s like to be in the other person’s shoes, what it’s like to be the patient, what it’s like to be the object of investigation by the professional. And empathy is an incredibly important thing, and understanding what it’s like for them helps the students to learn, if done properly, to do it better and to have the appropriate sort of relationship with their patients.
Q: You also mentioned these simulations give the faculty insight into how the student is performing. I wonder if we can talk about that; how is that real-time feedback helpful, not only for the student but for the professor?
A: Now, one thing that needs to be said is that it’s very difficult, often, to understand where all of your students are in the learning process, what specifically they need. We can be deluged by data, if we so choose, that may confuse more than enlighten.
That said, the data that come out of these systems can definitely be quite useful. One example: there are some writing assistance programs, Grammarly and its ilk, that can provide the exact provenance of writing assignments to the faculty, so they can show the faculty exactly how something was composed. Which parts did they write first? Which parts did they write second? Maybe they outlined it, then they revised this and changed that, and then they cut and pasted something from somewhere else and edited it.
All of those kinds of things give the faculty member much more detailed information about the student’s process, which can enable the faculty to give the students much more precise and useful feedback on their learning: What do they perhaps need to be doing differently? What are they doing well? And so forth. Because then you’re not just looking at a final paper, or even at a couple of drafts, and trying to infer what the student was doing so that you can give them feedback; you can actually see it more or less in real time.
That’s the sort of thing where the data can be very useful. And again, I apologize if I sound like a broken record. It all goes back to the human aspect of this, and to use data that helps the faculty member to see the individual student with their own individual ways of thinking, ways of behaving, ways of incorporating knowledge, to be able to relate to them more as an individual.
Briefly and parenthetically, one of the great hopes that we have for integrating AI into the educational process is that AI can help take away many of the bureaucratic and other burdens that faculty carry, and free and enable them to enhance their human relationship with their students, so that we can get back to the core of education, which really, I believe, is the transfer of knowledge and understanding through a human relationship between teacher and student.
It’s not what might be termed the “jug metaphor” for education, where I, the faculty member, have a jug full of knowledge, and I’m going to pour it into your brain, but rather, I’m going to develop a relationship with you, and through this relationship, you are going to be transformed, in some sense.
Q: This could be a whole other podcast topic, but I want to touch on this briefly. There is a risk sometimes when students are using AI-powered tools and faculty are using AI-powered tools that it is the AI engaging with itself and not necessarily the faculty with the students. When you talk about allowing AI to lift administrative burdens or ensure that faculty can connect with students, how can we make sure that it’s not robot to robot but really person to person?
A: That’s a huge and a very important topic, and one which I wish that I had a straightforward and direct and simple answer for. This is one of those risks that has to be mitigated and managed actively and continually.
One of the things that we emphasize in all our trainings for faculty and staff and all our educational modules for students about AI is the importance of the AI assisting you, rather than you assisting the AI. If the AI produces some content for you, it has to be within a process in which you’re not just reviewing it for correctness, but you’re producing the content where it’s helping you to do so in some sense.
That’s a little bit vague, because it plays out differently in different situations. That’s the case for faculty members who are producing a syllabus or using AI to produce other content for their courses: they need to make sure it’s content that they are producing, with AI’s help. Same thing for the students using AI.
For example, our institutional AI policy on academic honesty and integrity is, I believe, groundbreaking. For courses that don’t have a specific policy regarding the use of AI—and by next spring, all courses must have one—the default is that students are allowed to use AI for a very wide variety of tasks on their assignments.
You can’t use AI to simply do your assignment for you. That is forbidden. The key is that the work has to be the work of the student, but AI can be used to assist. In establishing this as a default policy, faculty, department chairs and deans have wide latitude to define more or less restrictive policies with specific carve-outs, simply because every field is different and the needs are different. But the default, and the basic attitude, is: AI is a tool. You need to learn to use it well and responsibly, whatever you do.
Q: I wanted to talk about the future of AI at the university. Are there any new initiatives you should tell our listeners about? How are you all thinking about continuing to develop AI as a teaching and learning tool?
A: It’s hard for me to talk about specific initiatives, because we believe that AI, within higher education particularly but in general as well, is fundamentally a start-up economy, in the sense that nobody, and I mean nobody, knows what to do with it, how to deal with it, how it works and how it doesn’t.
Therefore, our attitude is that we want to run as many experiments as we can and try as many different things as we can: different ways of teaching students, different ways of using AI to teach. Whether it’s through simulations, content creation, some sort of AI teaching assistant working with faculty members, or faculty members coming up with very creative assignments that enable students to learn the subject matter more deeply, with AI assisting them to do very difficult tasks, or tasks that require great creativity, or something like that.
The sky is the limit, and we want all of our faculty to experiment and develop. We’re seeking to create that within the institution. Touro is a wonderful institution for that, because we already have the basic institutional culture for this, to have an entrepreneurial culture within the university. So the university as a whole is an entrepreneurial ecosystem for experimenting and developing ways of teaching about and with and through AI.
This blog has been kindly written for HEPI by Richard Watermeyer (Professor of Higher Education and Co-Director of the Centre for Higher Education at the University of Bristol), Tom Crick (Professor of Digital Policy at Swansea University) and Lawrie Phipps (Professor of Digital Leadership at the University of Chester and Senior Research Lead at Jisc).
For as long as there have been national research assessment exercises (REF, RAE or otherwise), there have been efforts to improve the way in which research is evaluated and Quality Related (QR) research funding consequently distributed. Where REF2014 stands out for its introduction of impact as a measure of what counts as research excellence, for REF2029 it has been all about research culture. Yet where impact has become an integral dimension of the REF, the installation of research culture (within a far weightier environment statement or, as has been proposed, a People, Culture and Environment (PCE) statement) as a criterion of excellence appears far less assured, especially when set against a three-month extension to REF2029 plans.
A temporary pause on proceedings has been announced by Sir Patrick Vallance, the UK Government’s Minister for Science, as a means to ensure that the REF provides ‘a credible assessment of quality’. The corollary is that the hitherto proposed formula (many parts of which remain formally undeclared, much to the frustration of universities’ REF personnel and indeed researchers) is not quite fit for purpose, and certainly not if the REF is to ‘support the government’s economic and social missions’. Thus, it may transpire that research culture is ultimately downplayed or omitted from the REF. For some, this volte-face, if it materialises, may be greeted with relief: a pragmatic step back from the jaws of an accountability regime that has become excessively complex, costly and inefficient (if not estranged from the core business of evaluating and then funding so-called ‘excellent’ research), despite proclamations at the conclusion of every instalment that next time it will be less burdensome.
While the potential backtrack on research culture and potential abandonment of PCE statements will be the focus of explanations for the REF’s most recent hiatus, these may be only cameos in a discussion of its wider credibility and utility; a discussion which appears to be coming to a head, not least given the financial difficulties endemic to the UK sector, which the REF, with its substantial cost, is counted as further exacerbating. Moreover, as we are finding in our current research, the REF may have entered a period not of incremental reform and tinkering at the edges but of wholesale revision, as a consequence of higher education’s seemingly unstoppable colonisation by artificial intelligence.
With recent funding from Research England, we have undertaken to consult with research leaders and specialist REF personnel embedded across 17 UK HEIs – including large, research-intensive institutions and those historically with a more modest REF footprint – to gain an understanding of existing views of, and practices in, the adoption of generative AI tools for REF purposes. While our study has thrown up multiple views as to the utility and efficacy of using generative AI tools for REF purposes, it has nonetheless revealed broad consensus that the REF will inevitably become more AI-infused and enabled, if not ultimately, if it is to survive, entirely automated. The use of generative AI for narrative generation, evidence reconnaissance and the scoring of core REF components (research outputs and impact case studies) has been mooted as a set of potential applications with significant cost and labour-saving affordances, applications which might also get closer to ongoing, real-time assessment of research quality, unrestricted to seven-year assessment cycles. Yet the use of generative AI has also been (often strongly) cautioned against for the myriad ways in which it is implicated in bias and inaccuracy (as a ‘black box’ tool) and can itself be gamed, for instance via ‘adversarial white text’. This is coupled with wider ongoing scientific and technical considerations regarding transparency, provenance and reproducibility. Some even interpret its use as antithetical to the terms of responsible research evaluation set out by collectives like CoARA and COPE.
Notwithstanding such objections, we are witnessing these tools being used extensively (if in many settings tacitly and tentatively) by academics and professional services staff involved in REF preparations. We are also being presented with a view that the use of GenAI tools by REF panels in four years’ time is a fait accompli, especially given the speed with which the tools are being innovated. It may even be, the current pause intimates, that GenAI tools could be purposed in ways that circumvent the challenges of human judgement in the evaluation of research culture. Moreover, if the credibility and integrity of the REF ultimately rests on its capacity to demonstrate excellence via alignment with Government missions (particularly ‘R&D for growth’), then we are already seeing evidence of how AI technologies can achieve this.
While arguments have previously been made that the REF offers good value for (public) money, the immediate joint contexts of severe financial hardship for the sector, ambivalence as to the organisational credibility of the REF as currently proposed, and the attractiveness of AI solutions may produce a new calculation. This is a calculation, however, which the sector must own, transparently and honestly. It should not be wholly outsourced, and especially not to one of a small number of dominant technology vendors. A period of review must attend not only to the constituent parts of the REF but to how these are actioned and responded to. A guidebook for GenAI use in the REF is urgently needed, and it must place consistent practice at its heart. The current and likely escalating impact of Generative AI on the REF cannot be overlooked if the REF is to be claimed as a credible assessment of quality. The question then remains: is three months enough?
Notes
The REF-AI study is due to report in January 2026. It is a research collaboration between the universities of Bristol and Swansea and Jisc.
With generous thanks to Professor Huw Morris (UCL IoE) for his input into earlier drafts of this article.
As the writing across the curriculum and writing center coordinator on my campus, I am often asked by faculty how to detect their students’ use of generative AI and how to prevent it. My response to both questions is that we can’t.
In fact, it’s becoming increasingly hard to not use generative AI. Back in 2023, according to a student survey conducted on my campus, some students were nervous to even create ChatGPT accounts for fear of being lured into cheating. It used to be that a student had to seek it out, create an account and feed it a prompt. Now that generative AI is integrated into programs we already use—Word (Copilot), Google Docs (Gemini) and Grammarly—it’s there beckoning us like the chocolate stashed in my cupboard does around 9 p.m. every night.
A recent GrammarlyGO advertisement emphasizes the seamless integration of generative AI. In the first 25 seconds of this GrammarlyGO ad, a woman’s confident voice tells us that GrammarlyGO is “easy to use” and that it’s “easy to write better and faster” with just “one download” and the “click of a button.” The ad also seeks to remove any concerns about generative AI’s nonhumanness and detectability: it’s “personalized to you”; “understands your style, voice and intent so your writing doesn’t sound like a robot”; and is “custom-made.” “You’re in control,” and “GrammarlyGO helps you be the best version of yourself.” The message: Using GrammarlyGO’s generative AI to write is not cheating, it’s self-improvement.
This ad calls to my mind the articles we see every January targeting those of us who want to develop healthy habits. The ones that urge us to sleep in our gym clothes if we want to start a morning workout routine. If we sleep in our clothes, we’ll reduce obstacles to going to the gym. Some of the most popular self-help advice focuses on the role of reducing friction to enable us to build habits that we want to build. Like the self-help gurus, GrammarlyGO—and all generative AI companies—are strategically seeking to reduce friction by reducing time (“faster”), distance (it’s “where you write”) and effort (it’s “easy”!).
Where does this leave us? Do we stop assigning writing? Do we assign in-class writing tests? Do we start grading AI-produced assignments by providing AI-produced feedback?
Nope.
If we recognize the value of writing as a mode of thinking and believe that effective writing requires revision, we will continue to assign writing. While there is a temptation to shift to off-line, in-class timed writing tests, this removes the opportunity for practicing revision strategies and disproportionately harms students with learning disabilities, as well as English language learners.
Instead, like Grammarly, we can tap into what the self-help people champion and engage in what organizational behavior researchers Hayagreeva Rao and Robert I. Sutton call “friction fixing.” In The Friction Project (St. Martin’s Press, 2024), they explain how to “think and live like a friction fixer who makes the right things easier and the wrong things harder.” We can’t ban AI, but we can friction fix by making generative AI harder to use and by making it easier to engage in our writing assignments. This does not mean making our writing assignments easier! The good news is that this approach draws on practices already central to effective writing instruction.
After 25 years of working in writing centers at three institutions, I’ve witnessed what stalls students, and it is rarely a lack of motivation. The students who use the writing center are invested in their work, but many can’t start or get stuck. Here are two ways we can decrease friction for writing assignments:
Break research projects into steps and include interim deadlines, conferences and feedback from you or peers. Note that the feedback doesn’t have to be on full drafts but can be on short pieces, such as paragraph-long project proposals (identify a problem, research question and what is gained if we answer this research question).
Provide students with time to start on writing projects in class. Have you ever distributed a writing assignment, asked, “any questions?” and been met with crickets? If we give students time to start writing in class, we or peers can answer questions that arise, leaving students to feel more confident that they are going in the right direction and hopefully less likely to turn to AI.
There are so many ways we faculty (unintentionally) make our assignments uninviting: the barrage of words on a page, the lack of white space, our practice of leading with requirements (citation style, grammatical correctness), the use of SAT words or discipline-specific vocabulary for nonmajors: All this can signal to students that they don’t belong even before they’ve gotten started. Sometimes, our assignment prompts can even sound annoyed, as our frustration with past students is misdirected toward current students and manifests as a long list of don’ts. The vibe is that of an angry Post-it note left for a roommate or partner who left their dishes in the sink … again!
What if we were to reconceive our assignments as invitations to a party instead? When we design a party invitation, we have particular goals: We want people to show up, to leave their comfort zones and to be open to engaging with other people. Isn’t that what we want from our students when we assign a writing project?
If we designed writing assignments as invitations rather than assessments, we would make them visually appealing and use welcoming language. Instead of barraging students with all the requirements, we would foreground the enticing facets of the assignment. We would de-emphasize APA and MLA formatting and grammatical correctness and emphasize the purpose of the assignment. The Transparency in Learning and Teaching in Higher Education framework is useful for improving assignment layout.
Writing projects are also more likely to feel meaningful when students:
write in authentic genres and for real-world audiences;
share their writing in and beyond the classroom;
receive feedback on drafts from their professors and peers that builds on their strengths and provides specific tasks for how to improve their pieces; and
understand the usefulness of a writing project in relation to their future goals.
Much of this is confirmed by a three-year study conducted at three institutions that asked seniors to describe a meaningful writing project. If assignments are inviting and meaningful, students are more likely to do the hard work of learning and writing. In short, we can decrease friction preventing engagement with our assignments by making them sound inviting, by using language and layouts that take our audience into consideration, and by designing assignments that are not just assessments but opportunities to explore or communicate.
How then do we create friction when it comes to using generative AI? As a writing instructor, I truly believe in the power of writing to figure out what I think and to push myself toward new insights. Of course, this is not a new idea. Toni Morrison explains, “Writing is really a way of thinking—not just feeling but thinking about things that are disparate, unresolved, mysterious, problematic or just sweet.” If we can get students to truly believe this by assigning regular low-stakes writing and reinforcing this practice, we can help students see the limits of outsourcing their thinking to generative AI.
As generative AI emerged, I realized that even though my writing courses are designed to promote writing to think, I don’t explicitly emphasize the value of writing as a mode of discovery, so I have rewritten all my freewrite prompts to drive this point home: “This is low-stakes writing, so don’t worry about sentence structure or grammar. Feel free to write in your native language, use bullet points, or speech to text. The purpose of this freewriting is to give you an opportunity to pause and reflect, make new connections, uncover a new layer of the issue, or learn something you didn’t know about yourself.” And one of my favorite comments to give on a good piece of writing is “I enjoy seeing your mind at work on the page here.”
Additionally, we can create friction by getting to know our students and their writing. We can get to know their writing by collecting ungraded, in-class writing at the beginning of the semester. We can get to know our students by canceling class to hold short one-on-one or small group conferences. If we have strong relationships with students, they are less likely to cheat intentionally. We can build these bonds by sharing a video about ourselves, writing introductory letters, sharing our relevant experiences and failures, writing conversational feedback on student writing, and using alternative grading approaches that enable us to prioritize process above product.
There are no “AI-proof” assignments, but we can also create friction by assigning writing projects that don’t enable students to rely solely on generative AI, such as zines, class discussions about an article or book chapter, or presentations: Generative AI can design the slides and write the script, but it can’t present the material in class. Require students to include interactive components in their presentations so that they engage with their audiences. For example, a group of my first-year students gave a presentation on a selection from Jonathan Haidt’s The Anxious Generation, and they asked their peers to check their phones for their daily usage report and to respond to an anonymous survey.
Another group created a game, asking the class to guess which books from a display had been banned at one point or another. We can assign group projects and give students time to work on these projects in class; presumably, students will be less likely to misuse generative AI if they feel accountable in some way to their group. We can do a demonstration for students by putting our own prompts through generative AI and asking students to critique the outputs. This has the two-pronged benefit of demonstrating to students that we are savvy while helping them see the limitations of generative AI.
Showing students generative AI’s limitations and the harm it causes will also help create friction. Generative AI’s tendency to hallucinate makes it a poor tool for research; its confident tone paired with its inaccuracy has earned it the nickname “bullshit machine.” Worse still are the environmental costs, the exploitation of workers, the copyright infringement, the privacy concerns, the explicit and implicit biases, the proliferation of mis/disinformation, and more. Students should be given the opportunity to research these issues for themselves so that they can make informed decisions about how they will use generative AI. Recently, I dedicated one hour of class time for students to work in groups researching these issues and then present what they found to the class. The students were especially galled by the privacy violations, the environmental impact and the use of writers’ and artists’ work without permission or compensation.
When we focus on catching students who use generative AI or banning it, we miss an opportunity to teach students to think critically, we signal to students that we don’t trust them and we diminish our own trustworthiness. If we do some friction fixing instead, we can support students as they work to become nimble communicators and critical users of new technologies.
Catherine Savini is the Writing Across the Curriculum coordinator, Reading and Writing Center coordinator, and a professor of English at Westfield State University. She enjoys designing and leading workshops for high school and university educators on writing pedagogy.
By Michael Grove, Professor of Mathematics and Mathematics Education and Deputy Pro-Vice-Chancellor (Education Policy and Academic Standards) at the University of Birmingham.
We are well beyond the tipping point. Students are using generative AI – at scale. According to HEPI’s Student Generative AI Survey 2025, 92% of undergraduates report using AI tools, and 88% say they’ve used them in assessments. Yet only a third say their institution has supported them to use these tools well. For many, the message appears to be: “you’re on your own”.
The sector’s focus has largely been on mitigating risk: rewriting assessment guidance, updating misconduct policies, and publishing tool-specific statements. These are necessary steps, but alone they’re not enough.
Students use generative AI not to cheat, but to learn. But this use is uneven. Some know how to prompt effectively, evaluate outputs, and integrate AI into their learning with confidence and control. Others don’t. Confidence, access, and prior exposure all vary, by discipline, gender, and background. If left unaddressed, these disparities risk becoming embedded. The answer is not restriction, but thoughtful design that helps all students develop the skills to use AI critically, ethically, and with growing independence.
If generative AI is already reshaping how students learn, we must design for that reality and start treating it as a literacy to be developed. This means moving beyond module-level inconsistency and toward programme-level curriculum thinking. Not everywhere, not all at once – but with intent, clarity, and care.
We need programme-level thinking, not piecemeal policy
Most universities now have institutional policies on AI use, and many have updated assessment regulations. But module-by-module variation remains the norm. Students report receiving mixed messages – encouraged to use AI in one context, forbidden from it in another, given no guidance in a third, and left unsure in a fourth. This inconsistency leads to uncertainty and undermines both engagement and academic integrity.
A more sustainable approach requires programme-level design. This means mapping where and how generative AI is used across a degree, setting consistent expectations and providing scaffolded opportunities for students to understand how these tools work, including how to use them ethically and responsibly. One practical method is to adopt a ‘traffic light’ or five-level framework to indicate what kinds of AI use are acceptable for each assessment – for example, preparing, editing, or co-creating content. These frameworks need not be rigid, but they must be clear and transparent for all.
Such frameworks can provide consistency, but they are no silver bullet. In practice, students may interpret guidance differently or misjudge the boundaries between levels. A traffic-light system risks oversimplifying a complex space, particularly when ‘amber’ spans such a broad and subjective spectrum. Though helpful for transparency, these frameworks cannot reliably show whether guidance has been followed. Their value lies in prompting discussion and supporting reflective use.
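To make this concrete, here is a minimal sketch, in Python, of how a programme team might record such a five-level mapping so that expectations stay consistent across modules. The level names and assessment titles are hypothetical illustrations under the assumptions above, not drawn from any published framework.

```python
from enum import IntEnum

class AIUseLevel(IntEnum):
    """Hypothetical five-level AI-use framework for assessments."""
    NONE = 0          # no AI use permitted (e.g. an invigilated exam)
    PREPARING = 1     # AI for brainstorming and background research only
    EDITING = 2       # AI for proofreading and language polish
    CO_CREATING = 3   # AI-generated content permitted, with disclosure
    UNRESTRICTED = 4  # open AI use; the working process itself is assessed

# Map each assessment on a programme to its permitted level, so that
# students see one consistent set of expectations across modules.
assessment_policy: dict[str, AIUseLevel] = {
    "Year 1 closed-book exam": AIUseLevel.NONE,
    "Year 2 essay": AIUseLevel.EDITING,
    "Final-year project report": AIUseLevel.CO_CREATING,
}

for assessment, level in assessment_policy.items():
    print(f"{assessment}: level {level.value} ({level.name})")
```

Using an ordered scale rather than free-text rules means a programme team could, for instance, check at a glance what proportion of assessments sit at the no-AI end of the scale, which connects to the ‘Assured’ model discussed below.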
Design matters more than detection
Rather than relying on unreliable detection tools or vague prohibitions, we must design assessments and learning experiences that either incorporate AI intentionally or make its misuse educationally irrelevant.
This doesn’t mean lowering standards. It means doubling down on what matters in a higher education learning experience: critical thinking, explanation, problem-solving, and the ability to apply knowledge in unfamiliar contexts. In my own discipline of mathematics, students might critique AI-generated proofs, identify errors, or reflect on how AI tools influenced their thinking. In other disciplines, students might compare AI outputs with academic sources, or use AI to explore ideas before developing their own arguments.
We must also protect space for unaided work. One model is to designate a proportion of each programme as ‘Assured’ – learning and assessment designed to demonstrate independent capability, through in-person, oral, or carefully structured formats. While some may raise concerns that this conflicts with the sector’s move toward more authentic, applied assessment, these approaches are not mutually exclusive. The challenge is to balance assured tasks with more flexible, creative, or AI-enabled formats. The rest of the curriculum can then be ‘Exploratory’, allowing students to explore AI more openly, and in doing so, broaden their skills and graduate attributes.
Curriculum design should reflect disciplinary values
Not all uses of AI are appropriate for all subjects. In mathematics, symbolic reasoning and proof can’t simply be outsourced. But that should not mean AI has no role. It can help students build glossaries, explore variants of standard problems, or compare different solution strategies. It can provoke discussion, encourage more interactive forms of learning, and surface misconceptions.
These are not abstract concerns; they are design-led questions. Every discipline must ask:
What kind of skills, thinking and communication do we value?
How might AI support, or undermine, those aims?
How can we help students understand the difference?
These reflections play out differently across subject areas. As recent contributions by Nick Hillman and Josh Freeman underline, generative AI is prompting us to reconsider not just how students learn, but what now actually counts as knowledge, memory, or understanding.
Without a design-led approach, AI use will default to convenience, putting the depth, rigour, and authenticity of the higher education learning experience at risk for all.
Students need to be partners in shaping this future. Many already have deep, practical experience with generative AI and can offer valuable insight into how these tools support, or disrupt, real learning. Involving students in curriculum design, guidance, and assessment policy will help ensure our responses are relevant, authentic, and grounded in the realities of how they now learn.
A call to action
The presence of generative AI in higher education is not a future scenario; it is the present reality. Students are already using these tools, for better and for worse. If we leave them to navigate this alone, we risk widening divides, losing trust, and missing the opportunity to improve how we teach, assess, and support student learning.
What’s needed now is a shift in narrative:
From panic to pedagogy
From detection to design
From institutional policy to consistent programme-level practice.
Generative AI won’t replace teaching. But it will reshape how students learn. It’s now time we help them do so with confidence and purpose, through thoughtful programme-level design.
Over the last two years, I’ve witnessed the rise in students’ use of generative AI as a whole. Not surprisingly, more students are using generative AI to assist them in writing.
In an undergraduate business communication course that I oversee, the percentage of students who declared their use of generative AI for a writing assessment (i.e. business proposal) increased steadily over four semesters from 35% in 2023 to 61% in 2025. What’s more fascinating is the corresponding increase in the reported use of generative AI for their spoken assessment – their presentation (i.e. pitch) – from only 18% in 2023 to 43% in 2025.
*Note that there were about 350 students per semester and a total of about 1,400 students over four semesters/two years.
You may be wondering, how exactly are these students using generative AI for presentations?
They reported using generative AI to:
Create and edit visuals (e.g. images, prototypes/mockups, logos)
Get inspiration for rhetorical devices (e.g. taglines, stories, alliterations)
Prepare for the Q&A (e.g. generate questions, review/structure answers)
Beyond verbal language, visuals are an important facet of communication, and students need to be prepared for more multimodal communication tasks in the workplace (Brumberger, 2005). With digital media, there has been a shift in the balance between words and images (Bolter, 2003), which can be seen in websites, reports and even manuals. A student’s ability to communicate in writing and speaking must now be complemented by proficiency in visual language. Now, generative AI can reduce those barriers to creative visual expression (Ali et al., 2024).
For example, students on my business communication course use AI tools to create prototypes and mockups of their project ideas to complement their explanations. When they are unable to generate exactly what they need, they edit those images with traditional editing software or, more recently, software with generative AI editing abilities such as Adobe Firefly, which allows users to select specific areas of an image and use “generative fill” to brainstorm and edit without advanced technical skills. This and other AI image generators, including DALL-E (OpenArt) and Midjourney, have opened up possibilities for communicators to enhance their message using visuals.
Here are the AI visual tools students have reported using in their written and spoken assessments over two years:
What’s interesting from the list is not only the increase in the number of AI tools used but also the types of tools: (1) tools for specific purposes, such as Logopony for the creation of logos, Usegalileo for app interface designs, and Slidesgo for the creation of slides; and (2) tools for editing, such as Photoshop AI, Adobe Firefly, and Canva. Beyond that, we can see students using tools from companies that are constantly evolving: Canva with Magic Studio and Dream Lab; OpenAI, which has integrated DALL-E into ChatGPT and recently released Sora; and Google, which now offers Gemini 2.0 Flash. Generative AI is also becoming more accessible across platforms, with Meta AI integrated into WhatsApp, a cross-platform messaging app.
Ultimately, this list provides a glimpse of what some undergraduate business students are dabbling with, and educators should consider trying these tools out. More importantly, we should guide students in thinking about the visuals and graphics they ultimately use, because not all graphics are equally effective (Mayer and Moreno, 2003).
Some graphics are:
Decorative: neutral visuals that may enhance the aesthetics but are neither interesting nor directly relevant.
Seductive: highly interesting visuals that may not be directly relevant; they can distract the audience and divert cognitive processing toward irrelevant material.
Instructive: visuals that are directly relevant to the topic (Sung and Mayer, 2012).
However, that doesn’t mean all visuals should be instructive; it depends on the communicator’s goal. If the main goal is enjoyment, for example, decorative visuals can enhance the aesthetics, and seductive visuals can be so interesting that they lead to higher satisfaction. We should therefore remind students to be intentional in their use of visuals and AI tools. AI tools tend to create visuals with a lot of extraneous detail that may be distracting and lead to cognitive overload (DeLeeuw and Mayer, 2008), so students should refine their prompts by being more specific and precise (Hwang and Wu, 2024), and they should be prepared to use editing software, including AI-enhanced tools such as Adobe Firefly and Imagen, to achieve their final goal.
There are limitations to what AI can do at the moment.
It cannot be truly innovative because it learns from existing data.
It cannot fully understand subtle aspects like culture, values or emotional nuances (Hwang and Wu 2024).
But it can provide a stepping stone for students to visualize their ideas.
Let’s encourage our students to be aware of what they want to achieve when using AI tools and be proactive in selecting, rearranging, editing and refining the visuals to suit their purposes.
Aileen Wanli Lam is a Senior Lecturer and technology enthusiast at the National University of Singapore. She is fascinated by education technology and enjoys conversations about the latest industry developments. She is also passionate about professional communications, student engagement and educational leadership.
References
Ali, Safinah, Prerna Ravi, Randi Williams, Daniella DiPaola, and Cynthia Breazeal. “Constructing dreams using generative AI.” In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 21, pp. 23268-23275. 2024.
Bolter, Jay David. “Critical theory and the challenge of new media.” (2003).
Brumberger, Eva R. “Visual rhetoric in the curriculum: Pedagogy for a multimodal workplace.” Business Communication Quarterly 68, no. 3 (2005): 318-333.
DeLeeuw, Krista E., and Richard E. Mayer. “A comparison of three measures of cognitive load: Evidence for separable measures of intrinsic, extraneous, and germane load.” Journal of Educational Psychology 100, no. 1 (2008): 223.
Hwang, Younjung, and Yi Wu. “Methodology for Visual Communication Design Based on Generative AI.” International Journal of Advanced Smart Convergence 13, no. 3 (2024): 170-175.
Mayer, Richard E., and Roxana Moreno. “Nine ways to reduce cognitive load in multimedia learning.” Educational Psychologist 38, no. 1 (2003): 43-52.
Sung, Eunmo, and Richard E. Mayer. “When graphics improve liking but not learning from online lessons.” Computers in Human Behavior 28, no. 5 (2012): 1618-1625.
Preparing for an AI-Powered Evolution in How Students Search
If you’ve ever been involved in your institution’s digital marketing efforts, you’ve undoubtedly heard of search engine optimization — otherwise known as SEO.
But after more than a decade of optimizing keywords and backlinks in content for search engines like Google and Bing, we’re now at the dawn of a new age spurred on by artificial intelligence (AI), and a new approach is required: generative engine optimization (GEO).
As prospective students turn to AI tools and large language models (LLMs) to guide their college search, traditional SEO tactics are no longer enough. Digital marketing teams must also incorporate new GEO-focused tactics into their strategies.
In an increasingly competitive and LLM-driven world, institutions must now rethink their visibility, branding, and recruitment strategies for a digital landscape that continues to evolve.
Understanding Generative Engines and Their Impact on Students’ Search Behavior
Generative engine optimization is emerging as a critical response to the way AI is reshaping how prospective students find and evaluate colleges. Unlike traditional search engines, generative engines powered by large language models deliver conversational, synthesized responses — often without requiring users to click through to a website.
This shift is impacting how institutions need to approach their digital visibility and student engagement efforts.
The Rise of LLMs
As students move away from traditional search engines toward AI search tools, LLMs and LLM-powered tools like ChatGPT, Claude, Perplexity, and Google’s Gemini and Search Generative Experience (SGE) are leading the way.
These platforms generate real-time, AI-powered answers that summarize information from across the web — often citing sources, but not always linking to them directly. Their growing popularity signals a move away from standard search engine results toward fluid, question-driven discovery.
The Impact of LLMs on Students’ Search Experiences
Prospective students are already turning to generative engines to ask nuanced questions such as, “What are the top 20 online MSW programs?” or “Which colleges have the best student support services for veterans?”
Instead of having to navigate a list of blue links, they’re receiving direct, synthesized answers to their questions. This introduces key shifts that digital marketers must consider, including:
Fewer clicks to their institution’s website
A higher premium on being cited in credible content
Reduced visibility in traditional search engine results pages (SERPs)
For colleges and universities, adapting to this new behavior is essential to staying prominent in students’ minds during their decision-making process.
SEO vs. GEO in Higher Education
Search engine optimization and generative engine optimization share a common goal: to ensure content is discoverable, relevant, and credible. Both approaches rely on strategic keyword usage, high-quality content, and data-driven refinement to increase visibility.
SEO was built for traditional search engines that return ranked lists of links. GEO is designed for AI-powered engines that synthesize information and deliver complete answers.
For universities, this change requires a new, blended approach — one that takes both SEO and GEO into account when creating admissions materials, program pages, and search rankings-focused content such as blog posts.
SEO vs. GEO: A Side-by-Side Comparison
Category | SEO (Search Engine Optimization) | GEO (Generative Engine Optimization)
Primary Goal | Ranks web pages in search engine results pages | Surfaces content in AI-generated, conversational responses
Keyword Strategy | Optimizes for exact-match and high-volume keywords | Focuses on semantic relevance and contextual cues
User Experience | Prioritizes site structure, navigation, and readability | Prioritizes clear, structured content that AI can easily parse
Performance Measurement | Tracks rankings, clicks, and organic traffic | Tracks AI referrals, citation frequency, inclusion in LLM answers
Content Strategy | Page-level optimization for ranking | Multisource optimization for synthesis in AI-generated content
Adaptability Requirement | Evolves with search algorithm updates | Evolves with AI behavior, model updates, and platform preferences
User Search Experience | List of blue links with snippets | Zero-click answers, direct responses, and conversational recommendations
How Generative Engines Pull and Rank University Content
Generative engines like ChatGPT and Google’s SGE don’t rank web pages the same way traditional search engines do. Instead, they synthesize information from multiple sources to deliver a single, cohesive answer.
To be included in these AI-generated responses, university content needs to strike a balance between academic credibility and an accessible, student-friendly structure. AI prioritizes information that is well-organized, clearly written, and backed by authoritative sources, such as:
Research publications
Program pages
Institutional blogs
Institutions that prioritize clarity and credibility in their content are more likely to be cited and surfaced in generative search results.
Key GEO Strategies for Colleges and Universities
To stay visible in AI-driven searches, institutions need to adopt innovative content strategies tailored to how generative engines interpret and deliver information. Here are some core GEO tactics:
Showcase Faculty Within Content
Highlight faculty expertise in program pages, blog posts, and FAQs.
Link to faculty bios and published research to boost credibility.
Feature quotes, profiles, or insights to personalize academic offerings.
Ensure AI- and LLM-Friendly Structure and Markup
Use schema (structured data) markup to help AI understand the content’s context (a minimal sketch follows this list).
Organize pages with clear subheadings that mirror common student questions.
Example: “Is an Online EdD Respected?”
Boldface key points and use callout boxes to surface important information.
Design site architecture for easy crawling and content parsing.
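As a sketch of the markup point above, the snippet below generates schema.org FAQPage JSON-LD with Python, reusing the example question from the list; the answer text, and the choice of Python for generating the markup, are assumptions for illustration only.

```python
# A minimal sketch: schema.org FAQPage markup (JSON-LD) for a program page.
# The answer text is placeholder content, not real institutional copy.
import json

faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is an Online EdD Respected?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "Accredited online EdD programs are held to the same "
                    "academic standards as their on-campus counterparts."
                ),
            },
        }
    ],
}

# The output is embedded in the page head inside a
# <script type="application/ld+json"> tag.
print(json.dumps(faq_markup, indent=2))
```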
Create Concise and Clear Content
Use conversational, student-centered language.
Write in short, scannable paragraphs with clear takeaways.
Address common and next-level questions.
Examples: “How long does it take to complete this degree?” and “What can I do with my degree after graduation?”
Use Content Formats That Perform Well in GEO
Incorporate career outcome and salary tables.
Include degree comparisons.
Example: “MBA vs. MPA”
Use visuals and guides that explain steps for processes such as admissions, licensure, and the student journey.
Build Brand Authority and Trust
Invest in public relations campaigns to generate credible mentions and citations.
Maintain consistent messaging across the web, social media, and paid media.
Grow visibility through strategic content distribution and strong social media channels.
Measuring GEO Performance in Enrollment Marketing
As with every digital marketing initiative, it’s not enough to just roll out a GEO strategy — institutions need to measure its success. Here’s how it’s done in the GEO world:
Create LLM-Focused Dashboards via GA4 and Looker Studio
Institutions can build LLM-focused dashboards using Google Analytics 4 (GA4) and Looker Studio by creating filters for platforms like ChatGPT, Perplexity, Microsoft Copilot, Google Gemini, and Claude.
Google currently doesn’t provide direct data for AI Overview referrals, and it has remained noncommittal about whether it will ever release that data.
While LLMs are still evolving, isolating referral traffic from these tools can provide institutions with early insight into how students are discovering their content through AI.
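As a rough sketch of the filtering idea, the snippet below tallies sessions whose referral source matches common LLM hostnames, assuming session-source data has been exported from GA4 to CSV; the filename, column names, and hostname patterns are all assumptions rather than a documented GA4 workflow.

```python
# A minimal sketch: tally GA4 referral sessions from LLM platforms.
# Assumes a CSV export with "session_source" and "sessions" columns;
# the hostname patterns below are assumptions, not an official list.
import csv
import re
from collections import defaultdict

LLM_SOURCES = re.compile(
    r"chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
    r"copilot\.microsoft\.com|gemini\.google\.com|claude\.ai",
    re.IGNORECASE,
)

def llm_referrals(path: str) -> dict[str, int]:
    """Sum sessions per referral source that matches an LLM hostname."""
    totals: dict[str, int] = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if LLM_SOURCES.search(row["session_source"]):
                totals[row["session_source"]] += int(row["sessions"])
    return dict(totals)

if __name__ == "__main__":
    for source, sessions in sorted(llm_referrals("ga4_referrals.csv").items()):
        print(f"{source}: {sessions} sessions")
```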
Use Attribution Models for AI-Influenced Student Journeys
To fully understand how GEO affects students’ enrollment behavior, marketers need to evolve their attribution models, that is, how enrollment conversions are attributed to different channels. AI-generated responses often play a role at the top of the enrollment funnel, influencing students before they ever land on a university’s website.
Measuring that influence through multitouch attribution and long-view funnel analysis will become increasingly important as AI tools reshape how students explore, compare, and commit to higher education programs.
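To illustrate what a multitouch model can look like, here is a minimal sketch of position-based (U-shaped) attribution, which credits the first and last touches most heavily; the 40/20/40 split and the sample journey are illustrative assumptions, not a recommended model.

```python
# A minimal sketch of position-based (U-shaped) multitouch attribution:
# 40% of credit to the first touch, 40% to the last, and the remaining
# 20% split evenly across middle touches. The weights are an assumption.
def position_based_credit(touchpoints: list[str]) -> dict[str, float]:
    credit = {t: 0.0 for t in touchpoints}
    n = len(touchpoints)
    if n == 1:
        credit[touchpoints[0]] = 1.0
    elif n == 2:
        credit[touchpoints[0]] += 0.5
        credit[touchpoints[-1]] += 0.5
    else:
        credit[touchpoints[0]] += 0.4
        credit[touchpoints[-1]] += 0.4
        middle_share = 0.2 / (n - 2)
        for t in touchpoints[1:-1]:
            credit[t] += middle_share
    return credit

# Hypothetical journey: an AI answer influences the student before
# any visit to the university's own site.
journey = ["chatgpt_citation", "organic_search", "program_page", "application"]
print(position_based_credit(journey))
```

The point of a model like this is to stop assigning all credit to the final click; even a crude weighting makes AI-influenced first touches visible in reporting.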
Challenges and Ethical Considerations
As generative engines continue to shape how students discover universities, inherent challenges will likely arise.
AI tools can misrepresent data or present outdated information, raising concerns about their accuracy and whether they can be trusted. There’s also the risk that well-resourced, elite institutions may disproportionately dominate generative search results, reinforcing existing inequities. Lack of transparency in how algorithms surface and prioritize content makes it difficult for institutions to ensure they are receiving fair and accountable representation.
Future Trends in Higher Education GEO
When it comes to emerging digital marketing techniques like GEO, early investments can help institutions stay ahead of the curve.
Multimodal Optimization for Virtual Campus Tours and Visual Content
As generative engines evolve, optimizing for multimodal content — such as images, video, and virtual tours — will become increasingly important.
This goes beyond traditional desktop experiences. In Meta’s first quarter 2025 earnings call, Mark Zuckerberg predicted that smart glasses will eventually replace smartphones, describing them as ideal for AI and the metaverse.
With Meta already partnering with Ray-Ban on AI-integrated eyewear, higher ed marketers need to start preparing content that’s not just LLM-friendly but also immersive, interactive, and wearable-ready.
AI-Driven Personalization for Students
Rather than relying on static rankings or one-size-fits-all search experiences, AI is ushering in a wave of hyperpersonalization. Prospective students may soon interact with personalized advisors, see school rankings tailored to their goals, and receive customized digital content that aligns with their academic and career interests.
This shift will push institutions to deliver flexible, student-centered content that adapts to each individual’s intent and pathway.
Search by Outcome, Not Degree
Generative tools are beginning to trace backward from desired career outcomes by identifying what roles successful professionals hold, then linking those roles to specific programs, professors, and institutions.
For colleges and universities, this means alumni outcomes, employer partnership information, and job title visibility are essential signals. Institutions that surface these elements clearly will be better positioned to show up in outcome-based searches and AI-generated guidance.
Ready to Get Ahead of the Curve?
The use of AI and large language models in search is only going to increase, fundamentally reshaping how students discover, evaluate, and engage with higher education institutions.
Developing a strong generative engine optimization strategy is essential. GEO needs to be seamlessly integrated into your existing SEO and digital marketing efforts to ensure your institution stays visible and relevant in a rapidly shifting landscape.
With generative engines evolving at an unprecedented pace, now is the time to prepare for how you’ll reach the next generation of students.
Want to talk through how GEO fits into your broader enrollment strategy? Contact Archer Education to start the conversation.
To help folks think through what we should be considering regarding the impact on education of generative AI tools like large language models, I want to try a thought experiment.
Imagine if, in November 2022, OpenAI introduced ChatGPT to the world by letting the monster out of the lab for a six-week stroll, long enough to demonstrate its capacities—plausible automated text generation on any subject you can think of—and its shortfalls—making stuff up—and then coaxing the monster back inside before the villagers came after it with their pitchforks.
Periodically, as new models were developed that showed sufficient shifts in capabilities, the AI companies (OpenAI having been joined by others), would release public demonstrations, audited and certified by independent expert observers who would release reports testifying to the current state of generative AI technology.
What would be different? What could be different?
First, to extend the fantasy part of the thought experiment, we have to assume we would actually do stuff to prepare for the eventual full release of the technology, rather than assuming we could stick our heads in the sand until the actual day of its arrival.
So, imagine you were told, “In three years there will be a device that can create a product/output that will pass muster when graded against your assignment criteria.” What would you do?
A first impulse might be to “proof” the assignment, to make it so the homework machine could not actually complete it. You would discover fairly quickly that while there are certainly adjustments that can be made to make the work less vulnerable to the machine, given the nature of the student artifacts that we believe are a good way to assess learning—aka writing—it is very difficult to make an invulnerable assignment.
Or maybe you engaged in a strategic retreat, working out how students can do work in the absence of the machine, perhaps by making everything in class, or adopting some tool (or tools) that track the students’ work.
Maybe you were convinced these tools are the future and your job was to figure out how they can be productively integrated into every aspect of your and your students’ work.
Or maybe, being of a certain age and station in life, you saw the writing on the wall and decided it was time to exit stage left.
Given this time to prepare, let’s now imagine that the generative AI kraken is finally unleashed not in November 2022, but November 2024, meaning at this moment it’s been present for a little under six months, not two and a half years.
What would be different, as compared to today?
In my view, if you took any of the above routes, and these seem to be the most common choices, the answer is: not much.
The reason not much would be different is because each of those approaches—including the decision to skedaddle—accepts that the pre–generative AI status quo was something we should be trying to preserve. Either we’re here to guard against the encroachment of the technology on the status quo, or, in the case of the full embrace, to employ this technology as a tool in maintaining the status quo.
My hope is that today, given our two and a half years of experience, we recognize that because of the presence of this technology it is, in fact, impossible to preserve the pre–generative AI status quo. At the same time, we have more than enough information to question whether there is significant utility in this technology when it comes to student learning.
This recognition was easier to come by for folks like me who were troubled by the status quo already. I’ve been ready to make some radical changes for years (see Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities), but I very much understood the caution of those who found continuing value in a status quo that seemed to be mostly stable.
I don’t think anyone can believe that the status quo is still stable, but this doesn’t mean we should be hopeless. The experiences of the last two and a half years make it clear that some measure of rethinking and reconceiving is necessary. I go back to Marc Watkins’s formulation: “AI is unavoidable, not inevitable.”
But its unavoidability does not mean we should run wholeheartedly into its embrace. The technology is entirely unproven, and the implications for what matters in the experience of learning are still being mapped out. The status quo being shaken does not mean that everything upon which it was built has been rendered null.
One thing that is clear to me, something that is central to the message of More Than Words: How to Think About Writing in the Age of AI: Our energies must be focused on creating experiences of learning in order to give students work worth doing.
This requires us to step back and ask ourselves what we actually value when it comes to learning in our disciplines. There are two key questions which can help us:
What do I want students to know?
What do I want students to be able to do?
For me, for writing, these things are covered by the writer’s practice (the skills, knowledge, attitudes and habits of mind of writers). The root of a writer’s practice is not particularly affected by large language models. A good practice must work in the absence of the tool. Millions of people have developed sound, flexible writing practices in the absence of this technology. We should understand what those practices are before we abandon them to the nonthinking, nonfeeling, unable-to-communicate-with-intention automated syntax generator.
When the tool is added, it must be purposeful and mindful. When the goal of the experience is to develop one’s practice—where the experience and process matter more than the outcome—my belief is that large language models have very limited, if any, utility.
We may have occasion to need an automatic syntax generator, but probably not when the goal is learning to write.
We have another summer in front of us to think through and get at the root of this challenge. You might find it useful to join with a community of other practitioners as part of the Perusall Engage Book Event, featuring More Than Words, now open for registration.
I’ll be part of the community exploring those questions about what students should know and be able to do.
Remember when SEO was all about keywords and metatags, fueling now-defunct search engines like Yahoo, AltaVista and early Google? Those were the days of “keyword stuffing,” where quantity trumped quality and relevance, delivering poor search results and frustrating users. Google’s PageRank algorithm changed everything by prioritizing content quality, giving birth to the “Content is King” mantra and improving the user experience.
Fast forward to the Era of the Modern Learner, where digitally astute users demand fast and accurate information at their fingertips. To keep up with their heightened expectations, search engine algorithms have become more sophisticated, focusing on the intent behind each search query rather than simple keyword matching. This shift has led to AI-powered search features like Google’s AI Overviews, which deliver AI-generated summaries that now command prime real estate on the search engine results page.
In response, Generative Engine Optimization (GEO) is emerging. AI-powered search engines are moving beyond simply ranking websites; they are synthesizing information to provide direct answers. In this fast-paced environment, delivering the right information at the right time is more critical than ever, and all marketers, regardless of industry, must adapt their strategies beyond traditional SEO.
What is Generative Engine Optimization (GEO)?
Artificial intelligence is rapidly infiltrating tools across every industry, fundamentally reshaping the digital landscape. Generative Engine Optimization (GEO) is emerging as a new approach to digital marketing, leveraging AI-powered tools to generate and optimize content for search engines. GEO is a catalyst, driving a fundamental shift in how search engines present information and how users consume it.
GEO leverages machine learning algorithms to analyze user search intent, create personalized content, and optimize websites for improved search engine rankings. This advanced algorithmic approach delivers contextually rich information from credible sources, directly answering user searches and proactively addressing related inquiries. A proactive strategy that goes beyond traditional SEO ensures that a school’s information is readily discoverable, easily digestible and favorably presented by AI-powered search engines such as Google’s AI Overviews, ChatGPT, Perplexity and Gemini.
How GEO Works
At its core, Generative Engine Optimization (GEO) uses artificial intelligence to bridge the gap between user needs and search engine performance. GEO tools go beyond traditional SEO by harnessing AI to deeply understand user behavior and generate content that’s not only relevant but also personalized and performance-driven. Here is how it works across four core functions:
Analyzing User Intent: GEO starts by analyzing user intent. AI models examine search queries, website behavior and browsing patterns to uncover what users are specifically searching for. This helps marketers develop content strategies that directly align with user expectations and needs.
Generating Content: Using these insights, GEO tools generate original content tailored to meet the precise needs of the target audience. The result is content that answers user questions and aligns with how modern search engines evaluate relevance and quality.
Optimizing Content: GEO then optimizes the generated content for performance. AI refines readability, integrates keywords and enhances structural elements for improved visibility in search results, which ensures that content performs well in both traditional and AI-powered search environments.
Personalizing Content: Where GEO truly shines is in content personalization. By leveraging data like demographics, preferences and past interactions, GEO delivers tailored experiences that feel more relevant and engaging to individual users.
Comparing SEO and GEO
While SEO and GEO may seem like competing strategies, they actually complement one another. Both aim to improve visibility in search results and drive meaningful engagement but do so through different methods. Understanding how they align and where they diverge is key to developing a modern, well-rounded digital strategy.
Ways GEO is Similar to SEO
Despite their difference in execution, SEO and GEO share a common goal: delivering valuable content to users and meeting their search intent. Both SEO and GEO strategies contribute to:
Improving website visibility and search rankings in the search engine results pages (SERPs).
Driving organic traffic by making it easier for users to discover relevant information.
Boosting user engagement and conversion rates through informative, well-tailored content.
Ways GEO is Different from SEO
Where SEO and GEO begin to diverge is in their focus, tools, and content strategy:
Focus: Traditional SEO emphasizes keyword optimization, meta tags and technical structure. GEO, on the other hand, focuses on understanding user intent and creating dynamic, personalized content that adapts to evolving needs.
Tools: SEO relies on tools like keyword research platforms, backlink analysis, and manual content audits. GEO uses AI-powered platforms to analyze data, generate content, and automate optimization based on real-time user behavior.
Content: SEO often produces static, evergreen content that ranks over time. GEO enables the creation of responsive, personalized content that can shift based on user preferences, past interactions, and demographics.
While SEO has historically focused on driving clicks to websites and increasing rankings, GEO recognizes the increasing prominence of zero-click searches—where users find answers directly within AI-powered search overviews. In this new reality, GEO ensures your content remains visible and valuable even when the traditional click doesn’t occur. It does this by optimizing for how AI synthesizes and presents information in search results.
Is GEO Replacing SEO?
The rise of GEO has sparked an important question for marketers: Is SEO dead? The short answer is no. Rather than replacing SEO, GEO enhances it.
GEO builds on the foundation of traditional SEO by leveraging artificial intelligence to automate time-consuming tasks, deepen audience insights, and elevate content quality. A strong SEO strategy remains essential, and when paired with GEO, it becomes even more powerful.
To support marketers in building that foundation, tools like EducationDynamics’ SEO Playbook offer actionable strategies for mastering SEO fundamentals while staying adaptable to innovations like GEO. As the higher education marketing landscape evolves, institutions are reaching a critical inflection point: the status quo no longer meets the expectations of the Modern Learner, and a more dynamic, data-driven approach is essential to stay competitive.
Here’s how GEO supports and strengthens traditional SEO efforts:
Smarter Keyword Research and Optimization: GEO tools analyze search intent more precisely, allowing marketers to choose keywords that better reflect how real users search, creating content that directly answers those queries.
More Personalized Content Experiences: By generating dynamic content based on user behavior, preferences, and demographics, GEO helps ensure the right message reaches the right audience at the right time.
Streamlined Workflows: GEO automates content generation and optimization processes, making it easier to keep web pages fresh, relevant, and aligned with evolving search behaviors—all while saving time and resources.
SEO is far from obsolete; however, relying solely on traditional SEO tactics is no longer sufficient in today’s evolving higher education landscape. To truly transform their marketing approach, institutions must embrace innovative solutions.
As generative AI becomes increasingly embedded in how people search, marketers must adapt. While traditional SEO tactics like on-page optimization, site structure, and link-building still have a role to play, GEO provides the bold innovation needed to drive impactful outcomes. By pairing SEO strategies with GEO’s AI-driven insights and automation, institutions can achieve greater efficiency and effectiveness in their marketing efforts.
Together, SEO and GEO provide a holistic, future-ready framework to engage the Modern Learner, enhance digital marketing efforts, and drive both reputation and revenue growth, which are essential for long-term success.
Integrating GEO and SEO in Your Higher Education Marketing Strategy
As the digital landscape evolves, one thing remains clear: SEO is still essential for institutions looking to connect with today’s students. With the rapid adoption of AI into everyday search habits, though, SEO alone is no longer enough.
According to EducationDynamics’ 2025 Engaging the Modern Learner Report, generative AI is already transforming how prospective students evaluate their options. Nearly 70% of Modern Learners use generative AI chatbot platforms like ChatGPT, and 37% use these tools specifically to gather information about colleges and universities in their consideration set.
This shift signals a clear need for higher ed marketers to adapt their digital strategies. GEO provides a pathway to do that while better serving today’s students. By combining the proven fundamentals of SEO with GEO’s advanced AI capabilities, institutions can engage the Modern Learner more effectively at every stage of their decision-making journey.
Reaching Modern Learners: Integrating GEO and SEO Strategies
Speak to What Modern Learners Search For: Modern Learners expect content that speaks directly to their needs and interests. Use GEO tools to identify the actual search terms prospective students use, such as “flexible online MBA” or “how much does an online degree cost.” Then, develop SEO-optimized pages, blog posts, and FAQs that address these specific questions. Incorporate schema markup, structured headings, and internal links to boost visibility while keeping content informative and student-focused.
Personalize the Journey for Every Modern Learner: GEO enables marketers to go beyond generic messaging. Use behavioral data, such as which pages students visit, how long they stay or what programs they explore, to personalize touchpoints across channels. Personalization builds trust and shows Modern Learners you understand what matters to them.
Deliver the Seamless Digital Experiences Modern Learners Expect: Today’s students want fast, seamless experiences. Use GEO insights to identify where users drop off, then optimize navigation and page speed accordingly. Implement clear, scannable layouts with prominent CTAs to enhance your website’s structure and user-friendliness. Consider adding AI-powered chatbots to provide real-time support for everything from application steps to financial aid inquiries.
Use Data to Stay Ahead of the Modern Learner’s Needs: GEO tools give you visibility into what students search for, which content they engage with, and where they lose interest. Regularly review search patterns, click paths, and drop-off points to identify gaps in your content or barriers in the enrollment funnel. Use these insights to refine headlines, adjust keyword targeting, or introduce new resources that better align with what students care about.
As prospective students increasingly turn to AI tools to explore their options, higher education marketers must evolve their strategies to keep pace with changing search behaviors. While Search Engine Optimization remains essential for visibility and reach, it no longer fully reflects how today’s students search and engage online. GEO bridges that gap by adapting to real-time behaviors and preferences. To effectively connect with Modern Learners and stay competitive, institutions must evolve their digital strategies to include GEO.
The Future of SEO and GEO in Higher Education
The future of enrollment will be shaped by how well institutions adapt to evolving digital behaviors. GEO is one of the many new components at the forefront of this shift. As AI continues to reshape how students interact with institutions and search for information, GEO will become an instrumental tool for delivering personalized, real-time information to meet their expectations.
Traditional SEO will still play a vital role in ensuring your institution is discoverable, but GEO takes things further by extracting and tailoring relevant content to meet the specific needs of each user, creating dynamic, intent-driven engagement. With more students using generative AI tools to guide their enrollment journey, institutions must embrace strategies that reflect this new reality.
Looking ahead, AI-powered SEO strategies will empower higher education marketers to create adaptive content that speaks directly to individual student goals and behaviors. These tools will also make it possible to deliver faster, more relevant information across platforms, often surfacing answers before a student ever clicks a link. With deeper access to behavioral data and user intent, marketers can refine messaging in real time, ensuring they’re reaching the right students with the right information at the right moment in their decision-making journey.
Unlocking the Power of GEO with EducationDynamics
As the digital landscape continues to shift, it can be challenging for institutions to keep pace with rapid change—especially when it comes to reaching the demands of today’s students. GEO empowers institutions to transform their digital engagement strategies, moving beyond outdated tactics to cultivate meaningful connections with the Modern Learner.
As a leading provider of higher education marketing solutions, EducationDynamics specializes in helping colleges and universities stay ahead. Our team brings deep expertise in foundational SEO and is actively embracing the next wave of digital strategy through Generative Engine Optimization (GEO). We understand what it takes to create meaningful engagement in a competitive enrollment environment and we’re here to help you do just that.
Connect with us to discover how we can support your team in building personalized digital strategies—whether it’s laying the groundwork with SEO or embracing innovative approaches like GEO. We’re here to help your institution succeed in today’s ever-changing digital world.