A growing number of colleges and universities are embedding artificial intelligence tools and AI literacy into the curriculum with the intent of aiding student success. A 2025 Inside Higher Ed survey of college provosts found that nearly 30 percent of respondents have reviewed their curricula to ensure they will prepare students for AI in the workplace, and an additional 63 percent say they plan to do so.
In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with Shlomo Argamon, associate provost for artificial intelligence at Touro University, to discuss the university's policy on AI in the classroom, the need for faculty and staff development around AI, and the risks of gamifying education.
An edited version of the podcast appears below.
Q: How are you all at Touro thinking about AI? Where is AI integrated into your campus?
A: When we talk about the campus of Touro, we actually have 18 or 19 different campuses around the country and a couple even internationally. So we’re a very large and very diverse organization, which does affect how we think about AI and how we think about issues of the governance and development of our programs.
That said, we think about AI primarily as a new kind of interactive technology, one best seen as assistive to human endeavors. We want to teach our students how to use AI effectively in what they do and how to understand, mitigate and deal with the risks of using AI improperly, but above all, to always think about AI in a human context.
When we think about integrating AI for projects, initiatives, organizations, what have you, we need to first think about the human processes that are going to be supported by AI and then how AI can best support those processes while mitigating the inevitable risks. That’s really our guiding philosophy, and that’s true in all the ways we’re teaching students about AI, whether we’re teaching students specifically, deeply technical [subjects], preparing them for AI-centric careers or preparing them to use AI in whatever other careers they may pursue.
Q: When it comes to teaching about AI, what is the commitment you all make to students? Is it something you see as a competency that all students need to gain or something that is decided by the faculty?
A: We are implementing a combination—a top-down and a bottom-up approach.
One thing that is very clear is that every discipline, and in fact, every course and faculty member, will have different needs and different constraints, as well as competencies around AI that are relevant to that particular field, to that particular topic. We also believe there’s nobody that knows the right way to teach about AI, or to implement AI, or to develop AI competencies in your students.
We need to encourage and incentivize all our faculty to be as creative as possible in thinking about the right ways to teach their students about AI, how to use it, how not to use it, etc.
So No. 1 is, we’re encouraging all of our faculty at all levels to be thinking and developing their own ideas about how to do this. That said, we also believe very firmly that all students, all of our graduates, need to have certain fundamental competencies in the area of AI. And the way that we’re doing this is by integrating AI throughout our general education curriculum for undergraduates.
Ultimately, we believe that most, if not all, of our general education courses will include some sort of module about AI, teaching students the AI competencies relevant to the particular topics they're learning, whether it's writing, reading skills, presentations, math, science or history, and the different kinds of cognition and skills that you learn in different fields. The question is what AI competencies are relevant to each of those, and then to have students learning that.
So No. 1, they’re learning it not all at once. And also, very importantly, it’s not isolated from the topics, from the disciplines that they’re learning, but it’s integrated within them so that they see it as … part of writing is knowing how to use AI in writing and also knowing how not to. Part of learning history is knowing how to use AI for historical research and reasoning and knowing how not to use it, etc. So we’re integrating that within our general education curriculum.
Beyond that, we also have specific courses in various AI skills, both at the undergraduate [and] at the graduate level, many of which are designed for nontechnical students to help them learn the skills that they need.
Q: Because Touro is such a large university and it’s got graduate programs, online programs, undergraduate programs, I was really surprised that there is an institutional AI policy.
A lot of colleges and universities have really grappled with, how do we institutionalize our approach to AI? And some leaders have kind of opted out of the conversation and said, “We’re going to leave it to the faculty.” I wonder if we could talk about the AI policy development and what role you played in that process, and how that’s the overarching, guiding vision when it comes to thinking about students using and engaging with AI?
A: That's a question that we have struggled with, as all academic leaders, as you mentioned, have.
Our approach is to create policy at the institutional level that provides only the necessary guardrails and guidance, which then enables each of our schools, departments and individual faculty members to implement the correct solutions for their particular areas. Within this guidance and these guardrails, the work is done safely, and we know that it's going, overall, in a positive and, to some extent, institutionally consistent direction.
In addition, one of the main functions of my office is to provide support to the schools, departments and especially the faculty members to make this transition and to develop what they need.
It's an enormous burden on faculty members to shift, not just to add AI content to their classes, if they do so, but to shift the way that we teach and the way that we do assessments. Even the way that we relate to our students has to change, and that creates a burden on them.
It's a process to develop resources and ways of doing this. I and the people who work in our office have regular office hours to talk to faculty and to work with them. One of the most important things that we do, and we spend a lot of time and effort on this, is training for our faculty and staff on AI: on using AI, on teaching about AI, on the risks of AI, on mitigating those risks, on how to think about AI, all of these things. It all comes down to our faculty and staff. They are the university, and they're the ones who are going to make all of this a success, and it's up to us to give them the tools that they need to do that.
I would say that while on many questions there are no right or wrong answers, only different perspectives and different opinions, I think there is one right answer to "What does a university need to do institutionally to ensure success at dealing with the challenge of AI?" It's to support and train the faculty and staff, who are the ones who are going to make whatever the university does a success or a failure.
Q: Speaking of faculty, there was a university faculty innovation grant program that sponsored faculty to take on projects using AI in the classroom. Can you talk a little bit about that and how that’s been working on campus?
A: We have an external donor who donated funds so that we were able to award nearly 100 faculty innovation challenge grants for developing methods of integrating AI into teaching.
Faculty members applied and did development work over the summer, and they're now implementing their projects in their fall courses. We're currently going through the initial set of faculty reports, and we have projects from all over the university, in all different disciplines, taking many different approaches to using AI.
At the beginning of next spring, we’re going to have a conference workshop to bring everybody together so we can share all of the different ways that people try to do this. Some experiments, I’m sure, will not have worked, but that’s also incredibly important information, because what we’re seeking to do [is], we’re seeking to help our students, but we’re also seeking to learn what works, what doesn’t work and how to move forward.
Again, this goes back to our philosophy that we want to unleash the expertise, intelligence and creativity of our faculty. It's not top down, saying, "We have an AI initiative. This is what you need to be doing," but instead, "Here's something new. We'll give you the tools, we'll give you the support, we'll give you the funding to make something happen, make interesting things happen, make good things for your students happen, and then let's talk about it and see how it worked, and keep learning and keep growing."
Q: I was looking at the list of faculty innovation grants, and I saw that there were a few different simulations. There was one using classroom simulations to help train educators, and there was one with patient interactions for medical training. It seems like there are a lot of different AI simulations happening in different courses. I wonder if we can talk about the use of AI for experiential learning and why that's such a benefit to students.
A: Ever since there’s been education, there’s been this kind of distinction between book learning and real-world learning, experiential learning and so forth. There have always been those who have questioned the value of a college education because you’re just learning what’s in the books and you don’t really know how things really work, and that criticism has some validity.
But what we're trying to do, and what AI allows us and our students to do, is have more and more varied experiences of the kinds of things they're trying to learn, to practice what they're doing and then to get feedback on a much broader level than we could before. Certainly, whenever you had a course in, say, public speaking, students would get up, do some public speaking, get feedback and proceed. Now with AI, students can practice in their dorm rooms over and over and over again and get direct feedback; that feedback and those experiences can then be made available to the faculty member, who can give the students more direct, more human, concentrated or expert feedback on their performance based on this, and it just scales.
In the medical field, this is where it's hugely, hugely important. There's a long-standing institution in medical education called the standardized patient. Traditionally it's a human actor who learns to act as a patient; they're given a profile of what disorders they're supposed to have and how they're supposed to act, and then students can practice, whether it's diagnostic skills or questions of patient care and bedside manner, and then get expert feedback.
We now have, to a large extent, AI systems that can do this, whether it's an interactive text-based simulation or a voice-based simulation. We also have robotic mannequins that the students can work with, AI-powered, with AI doing the conversation. Then they can be doing physical exams on the mannequins, which simulate different kinds of conditions, and again, this gives the possibility of really scaling up this kind of experiential learning. Another kind of AI that has been found useful in a number of our programs, particularly in our business program, is a system that watches people give presentations and can give real-time feedback, and that works quite well.
Q: These are interesting initiatives, because they cut out the middleman of needing a third party, or maybe a peer, to help the student practice the experience. But in some ways, does it gamify things too much? Is it too much like a video game for students? How have you found that these are realistic enough to prepare students?
A: That is indeed a risk, and one that we need to watch. As in nearly everything that we’re doing, there are risks that need to be managed and cannot be solved. We need to be constantly alert and watching for these risks and ensuring that we don’t overstep one boundary or another.
When you talk about the gamification, or the video game nature of this, the artificial nature of it, there are really two pieces to it. One piece is the fact that there is no mannequin that exists, at least today, that can really simulate what it’s like to examine a human being and how the human being might react.
AI chatbots, as good as they are, will not, now or in the foreseeable future at least, be able to simulate human interactions entirely accurately. So there's always going to be a gap. What we need to do, as with other kinds of education: You read a book; the book is not going to be perfect, and your understanding of the book is not going to be perfect. There has to be an iterative process of learning. We have to have more realistic simulations and different kinds of simulations, so that students can, in a sense, mentally triangulate their different experiences to learn to do things better. That's one piece of it.
The other piece, when you say gamification, there’s the risk that it turns into “I’m trying to do something to stimulate getting the reward or the response here or there.” And there’s a small but, I think, growing research literature on gamification of education, where if you gamify a little bit too much, it becomes more like a slot machine, and you’re learning to maneuver the machine to give you the dopamine hits or whatever, rather than really learning the content of what you’re doing. The only solution to that is for us to always be aware of what we’re doing and how it’s affecting our students and to adjust what we’re doing to avoid this risk.
This goes back to one of the key points: Our whole philosophy of this is to always look at the technology and the tools, whether AI or anything else, as embedded within a larger human context. The key here is understanding when we implement some educational experience for students, whether it involves AI or technology or not, it’s always creating incentives for the students to behave in a certain way. What are those incentives, and are those incentives aligned with the educational objectives that we have for the students? That’s the question that we always need to be asking ourselves and also observing, because with AI, we don’t entirely know what those incentives are until we see what happens. So we’re constantly learning and trying to figure this out as we go.
If I could just comment on that peer-to-peer simulation: Medical students poking each other, or social work students interviewing each other for a social work kind of exam, has another important learning component, because the student who is being operated upon is learning what it's like to be in the other person's shoes, what it's like to be the patient, what it's like to be the object of investigation by the professional. Empathy is an incredibly important thing, and understanding what it's like for them helps the students learn, if done properly, to do it better and to have the appropriate sort of relationship with their patients.
Q: You also mentioned these simulations give the faculty insight into how the student is performing. I wonder if we can talk about that; how is that real-time feedback helpful, not only for the student but for the professor?
A: Now, one thing that needs to be said is that it’s very difficult, often, to understand where all of your students are in the learning process, what specifically they need. We can be deluged by data, if we so choose, that may confuse more than enlighten.
That said, the data that come out of these systems can definitely be quite useful. One example: There are some writing-assistance programs, Grammarly and its ilk, that can provide the exact provenance of a writing assignment to the faculty, so they can show the faculty exactly how something was composed. Which parts did the student write first? Which parts did they write second? Maybe they outlined it, then they revised this and changed that, and then they cut and pasted something from somewhere else and edited it.
All of those kinds of things give the faculty member much more detailed information about the student's process, which can enable the faculty to give the students much more precise and useful feedback on their own learning. What do they perhaps need to be doing differently? What are they doing well? And so forth. Because then you're not just looking at a final paper, or even at a couple of drafts, and trying to infer what the student was doing so that you can give them feedback; you can actually see it more or less in real time.
That's the sort of thing where the data can be very useful. And again, I apologize if I sound like a broken record: It all goes back to the human aspect of this, to using data that helps the faculty member see the individual student, with their own individual ways of thinking, behaving and incorporating knowledge, and to relate to them more as an individual.
Briefly and parenthetically, one of the great hopes that we have for integrating AI into the educational process is that AI can help take away many of the bureaucratic and other burdens placed on faculty, and free and enable them in different ways to enhance their human relationship with their students, so that we can get back to the core of education, which really, I believe, is the transfer of knowledge and understanding through a human relationship between teacher and student.
It’s not what might be termed the “jug metaphor” for education, where I, the faculty member, have a jug full of knowledge, and I’m going to pour it into your brain, but rather, I’m going to develop a relationship with you, and through this relationship, you are going to be transformed, in some sense.
Q: This could be a whole other podcast topic, but I want to touch on this briefly. There is a risk sometimes when students are using AI-powered tools and faculty are using AI-powered tools that it is the AI engaging with itself and not necessarily the faculty with the students. When you talk about allowing AI to lift administrative burdens or ensure that faculty can connect with students, how can we make sure that it’s not robot to robot but really person to person?
A: That’s a huge and a very important topic, and one which I wish that I had a straightforward and direct and simple answer for. This is one of those risks that has to be mitigated and managed actively and continually.
One of the things that we emphasize in all our trainings for faculty and staff and all our educational modules for students about AI is the importance of the AI assisting you, rather than you assisting the AI. If the AI produces some content for you, it has to be within a process in which you’re not just reviewing it for correctness, but you’re producing the content where it’s helping you to do so in some sense.
That's a little bit vague, because it plays out differently in different situations. That's the case for faculty members who are producing a syllabus or using AI to produce other content for their courses: They need to make sure that it's content they are producing with AI's help. The same thing goes for students using AI.
For example, our institutional AI policy on academic honesty and integrity is, I believe, groundbreaking. Our default policy, for courses that don't have a specific policy regarding the use of AI in that course (by next spring, all courses must have one), is that students are allowed to use AI for a very wide variety of tasks on their assignments.
You can't use AI to simply do your assignment for you. That is forbidden. The key is that the work has to be the work of the student, but AI can be used to assist. Faculty, department chairs and deans have wide latitude to define more or less restrictive policies with specific carve-outs, simply because every field is different and the needs are different. But by establishing this as the default, the basic attitude is: AI is a tool. You need to learn to use it well and responsibly, whatever you do.
Q: I wanted to talk about the future of AI at the university. Are there any new initiatives you should tell our listeners about? How are you all thinking about continuing to develop AI as a teaching and learning tool?
A: It's hard for me to talk about specific initiatives, because we believe that AI, within higher education particularly but in general as well, is fundamentally a start-up economy, in the sense that nobody, and I mean nobody, knows what to do with it, how to deal with it, how it works and how it doesn't.
Therefore, our attitude is that we want to run as many experiments as we can and try as many different things as we can: different ways of teaching students, different ways of using AI to teach. That might be through simulations, content creation or some sort of AI teaching assistant working with faculty members, or faculty members coming up with very creative assignments that enable students to learn the subject matter more deeply by having AI assist them with very difficult tasks, or tasks that require great creativity, or something like that.
The sky is the limit, and we want all of our faculty to experiment and develop. We're seeking to create that within the institution, and Touro is a wonderful place for it, because we already have an entrepreneurial institutional culture. So the university as a whole is an entrepreneurial ecosystem for experimenting with and developing ways of teaching about, with and through AI.
In a letter to Secretary of Homeland Security Kristi Noem, higher ed institutions say maintaining a consistent flow of international faculty and staff members is critical.
A number of higher education institutions and the associations that represent them are asking to be exempted from the new $100,000 H-1B visa application fee, saying the prohibitive cost could be detrimental to the recruitment and retention of international faculty, researchers and staff members.
In a letter to the Department of Homeland Security last week, the American Council on Education argued that such individuals “contribute to groundbreaking research, provide medical services to underserved and vulnerable populations … and enable language study, all of which are vital to U.S. national interests.” Without them, ACE and 31 co-signers said, key jobs in high-demand sectors such as health care, information technology, education and finance will likely go unfilled.
The letter came just days after U.S. Citizenship and Immigration Services launched a new online payment website and provided an updated statement on policies surrounding the fee. USCIS clarified that the fee will apply to any new H-1B petitions filed on or after Sept. 21 and must be paid before the petition is filed.
The update also referenced possible “exception[s] from the fee” but said those exceptions would only be granted in an “extraordinarily rare circumstance where the Secretary has determined that a particular alien worker’s presence in the United States as an H-1B worker is in the national interest.”
ACE said that H-1B visa recipients in higher education certainly meet those standards, citing data from the College and University Professional Association for Human Resources that shows that over 70 percent of international employees at colleges and universities hold tenure-track or tenured positions. The top five disciplines they work in are business, engineering, health professions, computer science and physical sciences.
“H-1B visa holders working for institutions of higher education are doing work that is crucial to the U.S. economy and national security,” the letter reads.
Despite the clarification provided by USCIS, ACE still had several remaining questions about the fee. These included whether the $100,000 would be refunded if a petition was denied and whether individuals seeking a "change of status" from an H-1B to an F-1 or J-1 would still be required to pay the fee.
At least two lawsuits have been filed against DHS concerning the visa fee. Neither has received a ruling so far.
As with the prior column, this week’s thesis evolves out of the Zoom keynote to the Rethink AI Conference, sponsored in part by the International Academy of Science, Technology, Engineering and Management and hosted by the ICLED Business School in Lagos, Nigeria. Thanks again to the chair of the International Professors Project, Sriprya Sarathy, and the conference committee for making my presentation possible.
Virtually all aspects and positions at universities will be touched by the transformation. The changes will come more rapidly than many of us in higher education are accustomed to or with which we are comfortable. In large part, the speed will be demanded by employers of our learners and by competition among universities. Change will also strike directly at the nature of what and how we teach.
It is not that we have seen no change in teaching over the years. Notably, delivery systems, methods and modes of assessment, and related areas have undergone significant changes. Anthony Piña, Illinois State University's chief online learning officer, notes that online learners surpassed 50 percent of students in 2022 and their share continues to rise. However, deeper changes in the nature of what we teach have also progressed as technology has influenced what employers are seeking.
Building knowledge has been the mantra in higher education for many centuries. The role of the university has been to build knowledge in learners to make them “knowledgeable.” Oxford Languages and Google define knowledge most concisely as “facts, information, and skills acquired by a person through experience or education; the theoretical or practical understanding of a subject.”
The emphasis on facts and information has taken on a somewhat changed role with the advent of new technologies over recent decades. Notably, the World Wide Web, with the arrival of the Mosaic browser in 1993, provided instant access to unprecedented volumes of information. While familiarity with key facts and information remains paramount, the web can recall and synthesize facts and information nearly as quickly as the human brain, and more thoroughly, in most instances. In a sense, the internet has become our extended, rapid-access, personal memory. Annual global web traffic exceeded a zettabyte for the first time in 2015. A zettabyte is 1,000 exabytes, one billion terabytes or one trillion gigabytes. This year, it is expected to hit 175 ZB.
More recently, we have seen a surge in professional certificates offered by higher education. As Modern Campus reports,
“Every professional needs upskilling in order to maintain a competitive edge in the workforce. Keeping ahead of the latest skills and knowledge has become more crucial than ever in order to align with evolving market demands. Although traditional degree programs have long been the standard solution, certificate programs have gained popularity due to their ability to offer targeted, accelerated skill development.”
However, agentic AI is just now emerging. It differs from prompt-to-answer generative AI in that agentic AI can include many workforce skills in its array of tools. In fact, working and collaborating with agentic AI will require an advanced, integrated skill set, as described by the Global Skills Development Council:
“In the fast-paced, digitally driven world, agentic AI is at the forefront of demanding new human competencies. While intelligent agents retain a place in daily life and work, individuals should transition to acquire agentic AI skills to thrive in the new age. These skills include, but are not limited to, working with technology, thinking critically, applying ethical reasoning, and adaptive collaboration with agentic AI systems. Such agentic AI skills empower one to consciously engage in guiding and shaping AI behaviors and outcomes rather than passively receiving and adapting to them. If one has agentic AI skills, they can successfully lead businesses, education, and creative industries in applying agents for innovation and impact. As such, re-dedicating ourselves to lifelong learning and responsible use of AI may prove vital in retaining humanity at the core of intelligent decision-making and progress. Without such competencies, professionals risk being bypassed by technologies they cannot control or understand. A passive attitude creates dependency on AI outcomes without the skill to query or improve them. Adopting agentic AI competencies equips individuals with the power to drive innovation and ensure responsible AI integration in the workplace.”
The higher-level skills humans will need as described by the Global Skills Development Council are different from many of the career-specific skills that universities now provide in short-form certificates and certification programs. Rather, I suggest that these broad, deep skills are ones that we might best describe as wisdom skills. They are not vocational but instead are deeper skills related to overall maturity and sophistication in leadership, vision and insight. They include thinking critically, thinking creatively, applying ethical reasoning and collaborating adaptively with both humans and agentic AI.
Agentic AI can be trained for the front-line skills of many positions. However, the deeper, more advanced and more cerebral skills that integrate human contexts and leadership vision are often reflective of what we would describe as wisdom rather than mere working skills. These, I would suggest, are the nature of what we will be called upon to emphasize in our classes, certificates and degrees.
Some of these skills and practices are currently taught at universities, often through case studies at the graduate level. Integrating them into the breadth of the degree curriculum as well as certificates may be a challenge, but it is one we must accomplish in higher education. Part of the process of fully embracing and integrating AI into our society will be for us humans to upgrade our own skills to maintain our relevance and leadership in the workplace.
Has your university begun to tackle the topics related to how the institution can best provide relevant skills in a world where embodied, agentic AI is working shoulder to shoulder with your graduates and certificate holders? How might you initiate discussion of such topics to ensure that the university continues to lead in a forward-thinking way?
Protests demanding divestment broke out across the country in the spring of 2024.
Lewis & Clark College has divested its endowment funds from all weapons manufacturers, making it one of few higher education institutions in the U.S. to do so, according to Oregon Public Broadcasting.
The new policy, approved by the private college’s Board of Trustees in mid-October, also requires the institution to publicly post at least once a year a list of the companies it invests in. The policy does not mention the war between Israel and Hamas, which sparked demand for such divestment in the first place.
For nearly two years, Lewis & Clark students have been calling on college leaders to withdraw any investments in weapons manufacturers or Israeli companies. But Paula Hayes, chair of the college’s Board of Trustees, said in a statement that the change had nothing to do with “any particular geopolitical situation or conflict.”
“Such considerations are inherently volatile, changing, and divisive, and contrary to the generally held view that the endowment should not be used to advocate specific positions on world affairs,” she said.
Students, on the other hand, called the decision a direct response to their demands.
“This is a functional divestment from genocide. The administration may attempt to depoliticize, but this is a political act,” Lewis & Clark student Sam Peak told OPB at a rally Oct. 22 celebrating the trustees’ decision. “This is a win for the boycott, divestment, sanctions movement and for solidarity with Palestine.”
A number of student groups across the country have made similar demands of their administrators, but few have succeeded. Among the institutions that have divested are the University of San Francisco and San Francisco State University. Others, including the University of Oregon, Oregon State University and Portland State University, have considered such an action but have yet to follow through.
The Prime Minister’s new target is for two-thirds of young people to participate in higher-level learning by age 25. This encompasses not only undergraduate degrees but also higher technical education and apprenticeships, all delivered under a single funding model for all Level 4-6 courses. Some have described this as England’s turn to tertiary, six years after the Augar Review called for a more ‘joined-up system’.
Since at least the 1990s, English post-secondary education has been characterised by market-based regulatory apparatus and fragmentation. Further education is associated with technical and vocational education, and training and entry to the labour market; higher education with professions, leadership, and research. Oversight of both is dispersed across multiple agencies and further disconnected from adult and lifelong learning. Critics have argued that, consequently, market logics have sustained wasteful competition and produced a homogenised system that privileges higher education over further education, to the detriment of equity and national skills needs.
If Augar exposed the limits of market-driven differentiation between further education and higher education, tertiary approaches in the devolved nations illustrate how greater collaboration and integrated oversight offer a potential corrective. Wales and Scotland have advanced considerably in a ‘tertiary’ direction and developed governance modes that exercise holistic stewardship over funding and quality regimes. They are justified on grounds of efficiency, concertedness, and the capacity to advance the common good. In Wales, Medr uses its statutory powers under the Well-being of Future Generations Act to guide institutions in meeting duties on equality, sustainability, and civic mission. In Scotland, the Scottish Funding Council leads the Outcome Agreement process, through which colleges and universities set out activities in return for funding. Even in England, partnerships at a regional level, such as those in the North East or through Institutes of Technology, aim to facilitate partnerships to align lifelong learning with local economic needs. In 2021, the last time a representative survey of the scale of collaboration took place, 80% of colleges and 50% of universities in the UK had formal programme links (and it is likely that collaboration has grown since then).
Despite this prevalence and enthusiasm, research on how the benefits arising from tertiary collaboration manifest at the level of institutions and students is limited. In a short exploratory study with the Edge Foundation, I examined one facet of tertiary systems in Scotland and England: the creation of formal student transition ‘pathways’ between colleges and universities. The aim was not a comprehensive survey, but to sample something of the nature of collaboration in existing systems, to gather evidence to think with and about the concept of tertiary and the place of collaboration and competition.
Collaboration as an adaptive strategy
Existing collaborations are, perhaps surprisingly, not foremost concerned with any given common good. Instead, collaboration often emerges as an adaptive strategy within conditions of resource scarcity. Local ‘natural alliances’ in shared specialisms, mutual values, and commitments to widening participation were important in establishing trust necessary to sustain joint work. Yet, as the study found, institutional precarity is the principal driver.
One Scottish interviewee put it plainly:
‘If I’m sitting there and I’ve got 500 applications, like 10 applications for any place, I’ve got good, strong applications. I’m not going to be going, right, how am I going to look at different ways to bring in students?’
Well-resourced institutions do not collaborate out of necessity: those under pressure do. Partnerships often take the form of a ‘grow your own’ recruitment pipeline, guaranteeing transitions between partner institutions. Universities could ask ‘some tough questions’ of colleges if progression was lower than anticipated. In some cases, institutions agree to partition markets to avoid directly competing for the same students.
Collaboration could also be used as an instrument of competition. In Scotland, articulation agreements (under which universities recognise vocational qualifications such as HNDs and HNCs and admit students with advanced standing) are commonplace. Colleges in this research reported ‘some bad behaviour’ where partner universities would use these agreements to siphon off students from colleges to secure enrolment numbers. This was contrary to the wishes of colleges, which argued that many such students might benefit from the more intimate and supportive college environment for an additional year, better preparing them to enter the more independent learning environment of university.
What collaboration offers students
Where collaboration was stable, tangible benefits followed for learners. Partnerships combine colleges’ attentive pedagogies and flexible resources with university accreditation and facilities. This enables smaller class sizes, greater pastoral attention, and sometimes improved retention and progression, particularly in educational cold spots. Colleges bring local specialisms and staff expertise, often linked to industry, which enrich university courses through co-design and joint delivery.
This lends cautious support to the claims of tertiary advocates: that collaboration can widen access and enhance provision. Yet formal, longitudinal evidence of graduate outcomes remains rare. The value of such partnerships, their distinctiveness, public benefit, and contribution to regional prosperity need to be more readily championed.
From expedient to strategic collaboration
As an instrument, collaboration is worth understanding. The capacity to facilitate collaboration as a strategic good is an important policy lever where market mechanisms are unable to respond immediately or efficiently to the imperatives of national need and public finance. The study suggests four priorities for policymakers:
Clarify national priorities and reform incentives
Collaboration has greater utility than as an institutional survival tool. With the bringing together of funding for further education and higher education, there is an opportunity to create stability. Together with the clear articulation of long-term educational goals, strategic cooperation in pursuit of those ends could be sustained.
Strengthen regional governance
Where regional stewardship exists, through articulation hubs or in Scotland’s Outcome Agreements, collaboration is more systematic. England’s existing fragmented oversight and policy churn undermine this. Regional coherence enables institutions to collectively make strategic planning decisions.
Value colleges’ distinct niche
Colleges’ localism, technical capacity, and pedagogical expertise are distinctive assets. Policy should promote these specialisms and encourage co-design and co-delivery rather than hierarchical franchising. Partnerships should foreground each institution’s unique contribution, not replicate the same provision in different guises.
Improve data sharing and evaluation
The absence of mechanisms to track students’ journeys and long-term outcomes, including ‘distance travelled’ evaluations, makes claims about distinctiveness and public benefit harder to substantiate.
Tertiary turns in resource scarcity
Policy discourse has tended to over-dichotomise competition and collaboration. The question is to what extent each strategy is most helpful for achieving agreed social ends. Where partnership is an appropriate mechanism, it requires a policy architecture with clarity of purpose and stability. To what ends collaboration is put must be settled through democratic means – a more complicated question altogether.
Parents who grew up in the ’80s and ’90s know the feeling: you’re listening to your kid’s playlist, and suddenly a song hits you with a wave of uncanny familiarity. Despite the claims by your teen that it is the latest and greatest, you know that it is just a repackaging of one of your favorite tunes from the past. I am noticing a similar trend with generative AI. It is inherently regurgitative: reshaping and repackaging ideas and thoughts that are already out there.
Fears abound as to the future of higher education due to the rise of generative AI. Articles from professors in many different fields predict that AI is going to destroy the college essay or even eliminate the need for professors altogether. Their fears are well founded. Seeing the advances that generative AI has made in just the past few months, I am constantly teetering between immense admiration and abject terror. My chatbot does everything for me, from scheduling how to get my revise and resubmit done in three months to planning my wardrobe for the fall semester. I fear becoming too reliant on it. Am I losing myself? Am I turning my ChatGPT into a psychological crutch? And if I am having these thoughts, what effect is generative AI having on my students?
Remix vs. Originality: Girl Talk or Beyonce
Grappling with the strengths and weaknesses of my own AI usage, I feel I have discovered what might be the saving grace of humanity (feel free to nominate me for the Nobel Peace Prize if you wish). As I hinted earlier, AI is more like a DJ remixing the greatest hits of society rather than an innovative game changer. My ChatGPT is more like Girl Talk (who you have probably never heard of. Just ask your AI) than Beyonce (who you most definitely have heard of). Not that there’s anything wrong with Girl Talk. Their mashups are amazing and require a special kind of talent. Just like navigating AI usage requires a certain balance of skills to create a usable final product. But no matter how many pieces of music from other artists you mash together, you will not eventually turn into a groundbreaking, innovative musician. Think Pat Boone vs. The Beatles, Sha Na Na vs. David Bowie, Milli Vanilli vs. Prince, MC Hammer vs. Lauryn Hill.
What AI Gets Wrong in Writing
As a mathematician and a novelist, I see this glaring weakness in both of these very different disciplines. I’ll start with writing. ChatGPT is especially helpful in coming up with strange character or planet names for my science fiction novels. It will also help me create a disease or something else I need to drive the plot further. And, of course, it can help me find an errant comma or fix a fragmented sentence. But that is about it. If I ask it to write an entire chapter, for example, it will come up with the most boring, derivative, and bland excuse for prose I have ever seen. It will attempt my humor but fail miserably. It sometimes makes my stomach turn, it’s so bad.
A study from the Wharton School found that ChatGPT reduces the diversity of ideas in a brainstorming pool, diminishing the diversity of the overall output and narrowing the scope of novel ideas (Meincke, Nave and Terwiesch 2025). Beyond that, I find that when I use ChatGPT to brainstorm, I typically don't use its suggestions. Those suggestions just spark new ideas and help me come up with something different and more me.
For example, I asked ChatGPT to write a joke about its own bad brainstorming practice of using the same core ideas over and over again. It said:
Joke: That’s not brainstorming—it’s a lazy mime troupe echoing each other.
That’s lame. I would never say that. But another joke it gave me sparked the music sampling analogy I opened this article with.
In any case, because of generative AI's inability to actually generate anything new, I have hope that the college essay, like the novel, will not die. Over-reliance on AI may indeed debilitate the essay, perhaps putting it on life support and forcing students and faculty to drag its lifeless body across the finish line of graduation. But there is still hope.
I remember one of my favorite English teachers in middle school required that we keep a journal. Each day she asked us to write something, anything in our journal, even if it was only a paragraph or just a sentence. Something about putting pen to paper sparked my creativity. It also sparked a lifelong notebook addiction. And even though I consider myself somewhat of a techie and a huge AI enthusiast, to this day I still use notebooks for the first draft of my novels.
It is clear to me that ChatGPT will never be able to write my novels in my voice. I don’t claim to be a great novelist. I just feel that some of my greatest work hasn’t been written yet. While ChatGPT may be able to write a poem about aardvarks in the style of Robert Frost or a ballad about Evariste Galois in the style of Carole King, it can’t write my next novel, because it doesn’t yet exist. And even when it tries to imitate my voice and my style, predicting what I will write next, it does a poor job.
The Research Paper Dilemma: AI vs. Process
A research paper is inherently different from a creative work of fiction, however. ChatGPT does do a pretty good job of gathering information on a topic from several sources and synthesizing it into a coherent paper. You just have to make sure to check for the errant hallucinated reference. And honestly, when are our students ever going to be asked to write a 15-page research paper on Chaucer without any resources? And if they are, ChatGPT can probably produce that product better than an undergraduate student can. But the process, I would argue, is more important than the final product.
In his Inside Higher Ed piece "Writing the Research Paper Slowly," JT Torres recommends a scaffolded approach to the research paper. This method focuses on the process of writing: exploring and reading sources, taking notes, organizing those notes into a "scientific story" and creating an outline. Teaching students the process of writing the paper instead of focusing on the end product leaves them more confident that they can not only complete the task required but also transfer those skills to another subject. Recognizing these limitations pushed me to rethink how I design assignments.
Using AI in the Classroom
Knowing that generative AI can do some things (but not all things) better than a human has made me a more intentional professor. Now when I create assignments, I think: Can ChatGPT do this better than an undergraduate student? If so, then what am I really trying to teach? Here are a few strategies I use:
Method #1: Assess Your Assessments with AI in Mind
When designing an assignment, ask yourself whether it is testing a skill that AI already performs well. If so, consider shifting your focus to why that skill matters, or how students can go beyond AI’s capabilities.
Method #2: Use AI Where It Adds Value – Remove It Where It Does Not
In some cases, it makes sense to integrate AI directly into the assignment (e.g., generating code, automating data analysis). In others, the objective may be to build a human-only skill like personal expression or creative voice. I decide case by case whether AI should be a part of the process or explicitly excluded.
Method #3: Clarify Whether You Are Teaching Theory or Application
When I am teaching statistical tests, I have to ask myself: Am I assessing whether students understand the theory behind the test or whether they can run one using software? If it's the latter, using AI to generate code might be appropriate, as in the sketch below. But if it's the former, I'll require manual calculations or a written explanation.
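To make the "run one using software" case concrete, here is a minimal, hypothetical sketch of the kind of code an AI assistant might be asked to produce. The scores and group names are made up for illustration; this is not an assignment from my courses.

```python
# Hypothetical example: the "application" version of a statistics task,
# where software (possibly AI-generated code) runs the test instead of
# a hand calculation.
from scipy import stats

group_a = [78, 85, 90, 72, 88, 95, 81]  # made-up exam scores, section A
group_b = [70, 80, 75, 68, 85, 79, 74]  # made-up exam scores, section B

# Welch's two-sample t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The theory-focused version of the same task would instead ask students to compute the statistic by hand and explain, in words, what the p-value does and does not tell them.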
Method #4: Add a Reflection to Any AI-Supported Assignment
For any assignment where students are allowed to use AI, they also have to write a reflection about how they used AI and whether or not it was helpful. This encourages metacognition and reduces overreliance.
Method #5: Require Students to Share Their Prompts and Revisions
Having students share the prompts they used in completing the assignment teaches them about transparency and the need for iteration in their interactions with an AI. Students should not just be cutting and pasting the first response from ChatGPT. They need to learn how to take a response, analyze it and then refine their prompt to get a better result. This helps them develop prompt engineering skills and realize that ChatGPT is not just a magic answer machine.
AI and the Limits of Innovation in Research
What about academic research in general? How is AI helping or hindering? Given that generative AI merely remixes the greatest hits of human history rather than creating anything new, I think its role in academic research is limited. Academic breakthroughs start with unasked questions. Generative AI works within the confines of existing data. It can't sense the frontier because it doesn't know there is a frontier. It can't sample past answers to a question that hasn't been asked yet. About a year ago, I was trying to get my AI to write a section of code for my research, and it kept failing. I spent a week trying to get it to do what I wanted. I realized it was having such a difficult time because I was asking it to do something that hadn't been done before. Finally, I gave up and wrote the piece of code myself, and it only took me about half an hour. Sure, coding capabilities have gotten better over the past year, but the core principle remains the same: AI still struggles to innovate. It can't do what hasn't already been done. Also, because of "creative flattery," it wants to make you happy, so it will try to do what you tell it to do even if it can't. The product will be super convincing, but it can still be wrong.
I recently asked AI to write a theoretical proof that polygonal numbers are Benford distributed (spoiler: they are not). Then I had it help me write a convincing, journal-ready article. The only problem is that it also wrote me a theoretical proof that polygonal numbers are NOT Benford distributed. I submitted the former to a leading mathematics journal to see what would happen. Guess what: They caught it. A human was able to detect the "AI slop." This shows me that (1) there will always be a need for human gatekeepers and (2) "creative flattery" is extremely dangerous in a research setting, which confirms the need for human review. The chatbot tries too hard to please, reinforcing what the user already thinks, even if that means proving and disproving the exact same thing. Academic research thrives on novel questions and unpredictable answers, which AI is incapable of producing since it inherently just regurgitates what is already out there.
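Readers who want to poke at the leading-digit claim empirically can do so in a few lines. The sketch below is my own illustration rather than the experiment described above: it tallies the leading digits of the first 100,000 triangular numbers (one hypothetical choice of polygonal family, with the cutoff chosen arbitrarily) and prints them next to the frequencies Benford's law would predict, log10(1 + 1/d).

```python
# Illustrative empirical check: leading digits of triangular numbers
# compared against the frequencies predicted by Benford's law.
import math
from collections import Counter

def leading_digit(n: int) -> int:
    """Most significant decimal digit of a positive integer."""
    while n >= 10:
        n //= 10
    return n

def polygonal(s: int, n: int) -> int:
    """n-th s-gonal number; s=3 gives the triangular numbers."""
    return ((s - 2) * n * n - (s - 4) * n) // 2

N = 100_000
counts = Counter(leading_digit(polygonal(3, n)) for n in range(1, N + 1))

print("digit  observed  Benford")
for d in range(1, 10):
    print(f"{d}      {counts[d] / N:.3f}     {math.log10(1 + 1 / d):.3f}")
```

Whatever the numbers look like for a given cutoff, the larger point stands: a quick human or computational sanity check is exactly the kind of gatekeeping a too-agreeable chatbot will not perform on its own.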
Helping Students See AI’s Blind Spots
The Benford polygonal numbers experiment is an important example of how we need to educate our students about AI usage in an academic setting. The Time.com article "Why A.I. Is Getting Less Reliable, Not More" states that despite its progress over the years, AI can still resemble a sophisticated misinformation machine. Students need to know how to navigate this.
One of my favorite assignments in my Statistics course is what I call:
Method #6: “Beat ChatGPT” – A Concept Mastery Challenge
Students must craft a statistics question that the chatbot gets wrong, explain why the chatbot got it wrong and then provide the correct answer. A tweak of this activity would be to take AI-generated content and human-written content, then compare and critique tone, clarity or originality.
Remixing isn’t Creation
AI-generated content is like a song built entirely from remixed samples. Sampling has its place in music (and in writing) but when everything starts to sound the same, our ears and brains begin to tune out. A great remix can breathe new life into a classic, but we still crave the shock of the new. This is why people lost their minds the first time they heard Beyonce’s Lemonade or Kendrick Lamar’s To Pimp a Butterfly – not because they followed a formula, but because they bent the rules and made something we’d never heard before. AI, for all its value, doesn’t break the rules. It follows them. That is the difference between innovation and imitation. It is also the reason why AI, in its current capacity, will not kill original thought.
Sybil Prince Nelson, PhD, is an assistant professor of mathematics and data science at Washington and Lee University, where she also serves as the institution’s inaugural AI Fellow. She holds a PhD in Biostatistics and has over two decades of teaching experience at both the high school and college levels. She is also a published fiction author under the names Sybil Nelson and Leslie DuBois.
Meincke, Lea, Gideon Nave, and Christian Terwiesch. 2025. “ChatGPT Decreases Idea Diversity in Brainstorming.” Nature Human Behaviour 9: 1107–1109. https://doi.org/10.1038/s41562-025-02173-x.
I’ve been doing some work with the University of London on the past, present, and future of university federations.
I've looked at well over 60 different kinds of university partnerships, alliances, and coalitions, and the idea of a university federation resists an easy definition. Crudely, it is a group of universities working together to achieve a shared goal, but lots of kinds of partnerships would fall in and out of that definition. The University of London is the obvious example – it has seventeen independent members and it defines its mission as expanding access to higher education. Globally, the vast majority of other kinds of federated models do not work like this.
Whose federation is it anyway?
The University of Oxford describes its 36 colleges, which are "independent and self-governing," as operating within a "federal system." It seems odd to suggest that a federation can exist within an institution (although the legal forms here complicate things), but federations are about the distribution of resources as much as regulatory structures.
On this basis the University of the Arts London would also qualify as a kind of federation. The colleges maintain their own identity, with their own expertise and reputation, and their work is framed around the idea of six colleges within one university. Similarly, the University of California has a single legal identity but nine campuses: one institution with a single leadership, yet diverse enough to operate across different geographies, programmes, and sub-identities.
There is perhaps then a difference between working in a federal way and being federated. This definition would encompass coalitions of universities working toward a single goal with some shared resources, like the N8 Research Partnership. It would also include the University of the Arctic, an almost entirely federal institution whose direction, governance, and activities are directed by the shared agreement of its members.
Scales
Governance forms and organisational function are often, but not always, linked. The University of London's membership has a formal governance responsibility to direct its activity, while the University of London maintains its own strong central purpose and activities. The University of the Highlands and Islands (UHI) is potentially both more centralised and more devolved than the University of London. Its degree-awarding powers are held centrally by the university, but delivery of programmes, in both FE and HE, occurs across more than 70 learning centres. Additionally, the Post-16 Education (Scotland) Act 2013 identifies UHI as a regional strategic body with responsibilities for planning, delivery, monitoring, and efficiency savings in further education across its operating area.
At the slightly less federated end there is somewhere like the University of the Arts Singapore (UAS), which emerged as an alliance between LASALLE College of the Arts (LASALLE) and Nanyang Academy of Fine Arts (NAFA). UAS has a vice chancellor, each member has its own president, who serves as a deputy vice chancellor of UAS, and they lean into both their shared capacity and their individual identities. As they state:
As an alliance, UAS has the unique advantage of leveraging the strengths of both our founding members, LASALLE and NAFA, while allowing each to remain distinct colleges. UAS will work in close collaboration with the two arts institutions to lead and provide strategic direction, and will validate, confer and award UAS degrees offered by both arts institutions.
There are lots of other examples, from Paris Sciences et Lettres University, a single institution with eleven constituent schools (some of which are several hundred years old), to the Canadian model, where the likes of the University of Toronto hold three independent religious institutions within their group, sharing resources while each maintains its own identity.
Models
The strictest definition of federation involves a legal form – but there is much in between. A federation may be a shared brand, an informal network, a federated project with individual or shared ownership, a national or regional mission with shared funds, shared infrastructure with formal governance relationships, a group of universities with a single degree awarder, a coalition of providers with a shared and funded purpose, or an entirely devolved body that exists only by dint of the activities of its members.
If a federation can take lots of different forms, it can by extension serve lots of different purposes. Ideally, the form of the federation should follow the agreed purpose if it is to be successful. The strategic vision has to be big enough that the difficult compromises that come with working together make sense. Cost-saving is unlikely to be big enough to motivate all the pieces within a federated ecosystem, but improving international standing, delivering better teaching, and funding research more effectively, supported by the efficient allocation of resources, might be.
Across federations there is often legislation and regulation that enables the constituent organisations to work together. This was the case with UAS; UHI has a long history of partnerships, funding, and regulation; and there is underpinning legislation in France to encourage the geographic coordination of research assets. It is noticeable that while the OfS has welcomed the idea of institutions working more closely together, there isn’t actually a legislative or regulatory underpinning to make that easier.
Success
If a federation has a clear purpose and an accommodating regulatory environment it may have a reasonable chance of success. That still isn’t enough to wish one into being, because of the operational complexity that can underpin such arrangements. Strategically, this includes whether it is more efficient, effective, or clear to have a single governance, quality, and approval regime, whether resources are best shared or kept local, and whether staff should be separate or together. Again, much of this depends on federal form, but sharing infrastructure between institutions, even within federations, is not that common. The sharing of resources should be a second-order concern, after the purpose of doing so, but the practicalities can be complex, expensive, and liable to absorb much organisational attention.
It is therefore difficult to define success, but it is possible to improve the chances of federations being successful. Federations should begin with a clear purpose, then look at how the strategic sharing of assets can achieve that purpose, and then work through the practicalities of sharing those assets. A federation is about purpose, governance, finance, and brand, but it is also about creating an ecosystem where partners believe the shared negotiation of purpose, strategy, and execution is more powerful than a single organisation doing this alone. A federation is about giving something up, whether that is some identity or some resources, in the shared belief that the collective gain will outweigh any individual loss.
If federations are to become more of a feature of the higher education landscape, the largest challenges may not be structural but cultural. Recent reforms of higher education in England were largely about greater competition between providers. To form a federation is to acknowledge that agglomeration benefits may be achieved through cooperation, consolidation, and the strategic deprioritisation of some work where others have greater expertise.
The central plank of the government’s recent white paper is that the homogeneity of the sector is an impediment to the efficient allocation of resources. If it is serious about specialisation, particularly within specific geographies, it should open up more routes to federal structures and the strategic benefits they may bring.
James Coe is chairing a panel on federations at The Festival of Higher Education with the University of London.
Sensemaking is an essential part of one’s personal knowledge mastery, so vital that it ought to be a regular practice for any human, particularly those who desire to be taken seriously and be able to add value in workplaces, communities, and societies. Sensemaking centers on a desire to solve problems and gets fueled by curiosity.
Jarche shares that there’s a whole spectrum of potential sensemaking approaches, everything from filtering information (making a list) to contributing new information (writing a thesis). Sensemaking requires practice and vulnerability. We aren’t always going to get things right the first time we come to a conclusion.
Half-Baked Ideas
In introducing the idea of “half-baked ideas,” Jarche writes:
If you don’t make sense of the world for yourself, then you’re stuck with someone else’s world view.
As I reflect on my own ability to come up with half-baked ideas, whether I’m inclined to share one in a social space depends on how controversial the idea happens to be. I find myself thinking about what hashtags, or even words, might attract people looking for an internet fight, or wanting to troll a stranger.
If a half-baked idea is related to teaching and learning, I am less concerned about who may want to publicly disagree with it, but if it is about politics, I just don’t see the value in “thinking aloud,” given what internet riff-raff may decide to throw at me, metaphorically speaking. Part of that is that I’m not an expert; another part of this resistance is that I would rather do this kind of sensemaking offline, at least when it comes to trying out ideas about various policies, political candidates, and issues of the day.
Committing to Practice
I just launched a sensemaking practice involving books about teaching and learning. I usually read the books of upwards of 95 percent of the authors I interview for the Teaching in Higher Ed podcast. However, I would like both to find other ways to surface my own learning from all that reading and to cultivate the skills to get better at video.
The series is called Between the Lines: Books that Shape Teaching and Learning, and I anticipate eventually producing an average of one video per week. I won’t hold myself to expectations quite as high as those I have for the podcast: I’ve been going strong, airing an episode every single week since June 2014, and I don’t want that kind of self-imposed pressure for this practice of experimentation, skill-building, and sensemaking.
Cast your mind back to 2005, when the four UK higher education funding bodies ran the first ever compulsory survey of students’ views on the education they receive – the National Student Survey (NSS).
Back then the very idea of a survey was controversial: we were worried about the impact on the sector’s reputation, the potential for response bias, and the possibility that students would be fearful of responding negatively in case their university downgraded their degree.
Initial safeguards
These fears led us to make three important decisions, all of which are now well past their sell-by date. These were:
Setting a response rate threshold of 50 per cent
Restricting publication to subject areas with more than 22 respondents
Only providing aggregate data to universities.
At the time all of these were very sensible decisions designed to build confidence in what was a controversial survey. Twenty years on, it’s time to look at these with fresh eyes to assure ourselves they remain appropriate – and to these eyes they need to change.
Embarrassment of riches
One of these rules has already changed: responses are now published where 10 or more students respond. Personally, I think this represents a very low bar, determined as it is by privacy more than statistical reasoning, but I can live with it, especially as research has shown that “no data” can be viewed negatively.
Of the other two, first let me turn to the response rate. Fifty per cent is a very high response rate for any survey, and the fact the NSS achieves a 70 per cent response rate is astonishing. While I don’t think we should be aiming to get fewer responses, drawing a hard line at 50 per cent creates a cliff edge in data that we don’t need.
There is nothing magical about 50 per cent – it’s simply a number that sounds convincing because it means that at least half your students contributed. A 50 per cent response rate does not ensure that the results are free from bias: if, for example, propensity to respond were in some way correlated with a positive experience, the results would still be flawed.
I would note that what limited evidence there is suggests that propensity to respond is not correlated with a positive experience, but it’s an under-researched area and one the Office for Students (OfS) should publish some work on.
Panel beating
This cliff edge is even more problematic when the data is used in regulation, as the OfS proposes to do as part of the new TEF. Under OfS proposals, providers that don’t have NSS data, whether due to small cohorts or a “low” response rate, would have NSS evidence replaced with focus groups or other types of student interaction. This makes sense when the reason is a low absolute number of responses, but not when it is due to missing an exceptionally high response rate threshold – one that Oxford and Cambridge failed to hit for many years.
While focus groups can offer valuable insights, and usefully sit alongside large-scale survey work, it is utterly absurd to ignore evidence from a survey because an arbitrary and very high threshold is not met. Most universities will have several thousand final year students, so even if only 30 per cent of them respond you will have responses from hundreds if not thousands of individuals – which must provide a much stronger evidence base than a handful of focus groups. Furthermore, that evidence base will be consistent with every other university’s, creating one less headache for assessors in comparing diverse evidence.
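To make that arithmetic concrete, here is a minimal sketch of the sampling error such a response count implies. The cohort size of 3,000, the simple random sampling assumption, and the 95 per cent confidence level are illustrative choices rather than figures from the survey, and sampling error says nothing about the non-response bias discussed above.

```python
import math

def margin_of_error(cohort_size: int, response_rate: float,
                    p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95 per cent margin of error for a survey proportion,
    treating respondents as a simple random sample of the cohort and
    applying a finite population correction."""
    n = int(cohort_size * response_rate)                    # number of respondents
    se = math.sqrt(p * (1 - p) / n)                         # standard error of a proportion
    fpc = math.sqrt((cohort_size - n) / (cohort_size - 1))  # finite population correction
    return z * se * fpc

# An illustrative cohort of 3,000 final-year students with a 30 per cent
# response rate still yields 900 responses.
print(round(100 * margin_of_error(3000, 0.30), 1))  # about 2.7 percentage points
```

Even on these cautious assumptions, 900 respondents pin down a proportion to within roughly three percentage points, a level of precision that a focus group programme cannot approach.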
The 50 per cent response rate threshold also looks irrational when set against the 30 per cent threshold for the Graduate Outcomes (GO) survey. Any response rate threshold is arbitrary, but applying two different thresholds needs rather more justification than the fact that the surveys achieve different response rates. Indeed, I might argue that the risk of response bias is higher with GO for a variety of reasons.
NSS to GO
In the absence of evidence in support of any different threshold, I would align the NSS and GO publication thresholds at 30 per cent and make the response rates more prominent. I would also share NSS and GO data with TEF panels irrespective of the response rate, allowing them to rely on their expert judgement, supported by the excellent analytical team at the OfS. The panels could then choose to seek additional evidence if they considered it necessary.
In terms of sharing data with providers, 2025 is really very different to 2005. Social media has arguably exploded and is now contracting, but in any case attitudes to sharing have changed, and it is unlikely the concerns that existed in 2005 will be the same as those of the current crop of students.
For those who don’t follow the detail, NSS data is provided back to universities via a bespoke portal that offers a number of pre-defined cuts of the data and comments, together with the ability to create your own cross-tabs. This data, while very rich, does not have the analytical power of individualised data and is still subject to suppression for small numbers.
What this means is that if we want to understand which areas to improve, we are forced to deduce it from a partial picture rather than being laser-focussed on exactly where the issues are, and this applies to both the Likert scale questions and the free text.
It also means that providers cannot form a longitudinal view of the student experience by linking to other data and survey responses they hold at an individual level – something that could generate a much richer understanding of how to improve the student experience.
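As an illustration of the kind of constraint that suppression imposes, here is a minimal sketch in which a provider-style cut of the data has small cells hidden before release. The column names, the toy data, and the threshold of 10 are assumptions for the sketch, not the NSS data model or its publication rules.

```python
import pandas as pd

# Toy response data standing in for a provider-level extract (illustrative only).
responses = pd.DataFrame({
    "subject": ["History"] * 40 + ["Physics"] * 25 + ["Law"] * 8,
    "agrees_with_q1": [True] * 30 + [False] * 10    # History
                      + [True] * 20 + [False] * 5   # Physics
                      + [True] * 6 + [False] * 2,   # Law
})

# A pre-defined cut of the data: agreement counts by subject.
counts = pd.crosstab(responses["subject"], responses["agrees_with_q1"])

# Hide any cell below an assumed publication threshold of 10 respondents.
suppressed = counts.mask(counts < 10, other="*")
print(suppressed)
```

Even in this toy example, most of the detail disappears for the smaller subjects, which is exactly why working only from pre-suppressed cuts makes it harder to pinpoint where the issues are.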