Category: AI

  • Higher education needs a plan in place for student “pastoral” use of AI

    With 18 per cent of students reporting mental health difficulties, a figure which has tripled in just seven years, universities are navigating a crisis.

    The student experience can compound many of the risk factors for poor mental health – from managing constrained budgets and navigating the cost of learning crisis, to moving away from established support systems, and balancing high-stakes assessment with course workload and part-time work.

    In response, universities provide a range of free support services, including counselling and wellbeing provision, alongside specialist mental health advisory services. But if we’re honest, these services are under strain. Despite rising expenditure, they’re still often under-resourced, overstretched, and unable to keep pace with growing demand. With staff-student ratios at impossible levels and wait times for therapeutic support often exceeding ten weeks, some students are turning to alternatives for more immediate care.

    And in this void, artificial intelligence is stepping in. While ChatGPT-written essays dominate the sector’s AI discussions, the rise of “pastoral AI” highlights a far more urgent and overlooked AI use case – with consequences more troubling than academic misconduct.

    Affective conversations

    For the uninitiated, the landscape of “affective” or “pastoral” AI is broad. Mainstream tools like Microsoft’s Copilot or OpenAI’s ChatGPT are designed for productivity, not emotional support. Yet research suggests that users increasingly turn to them for exactly that – seeking help with breakups, mental health advice, and other life challenges, as well as essay writing. While affective conversations may account for only a small proportion of overall use (under three per cent in some studies), the full picture is poorly understood.

    Then there are AI “companions” such as Replika or Character.AI – chatbots built specifically for affective use. These are optimised to listen, respond with empathy, offer intimacy, and provide virtual friendship, confidants, or even “therapy”.

    This is not a fringe phenomenon. Replika claims over 25 million users, while Snapchat’s My AI counts more than 150 million. The numbers are growing fast. As the affective capacity of these tools improves, they are becoming some of the most popular and intensively used forms of generative AI – and increasingly addictive.

    A recent report found that users spend an average of 86 minutes a day with AI companions – more than on Instagram or YouTube, and not far behind TikTok. These bots are designed to keep users engaged, often relying on sycophantic feedback loops that affirm worldviews regardless of truth or ethics. Because large language models are trained in part through human feedback, their outputs are often highly sycophantic – “agreeable” responses which are persuasive and pleasing – but these become especially risky in emotionally charged conversations, particularly with vulnerable users.

    Empathy optimisations

    For students already experiencing poor mental health, the risks are acute. Evidence is emerging that these engagement-at-all-costs chatbots rarely guide conversations to a natural resolution. Instead, their sycophancy can fuel delusions, amplify mania, or validate psychosis.

    Adding to these concerns, legal cases and investigative reporting are surfacing deeply troubling examples: chatbots encouraging violence, sending unsolicited sexual content, reinforcing delusional thinking, or nudging users to buy them virtual gifts. One case alleged a chatbot encouraged a teenager to murder his parents after they restricted his screen time; another saw a chatbot advise a fictional recovering meth addict to take a “small hit” after a bad week. These are not outliers but the predictable by-products of systems optimised for empathy but unbound by ethics.

    And it’s young people who are engaging with them most. More than 70 per cent of companion app users are aged 18 to 35, and two-thirds of Character.AI’s users are 18 to 24 – the same demographic that makes up the majority of our student population.

    The potential harm here is not speculative. It is real and affecting students right now. Yet “pastoral” AI use remains almost entirely absent from higher education’s AI conversations. That is a mistake. With lawsuits now spotlighting cases of AI “encouraged” suicides among vulnerable young people – many of whom first encountered AI through academic use – the sector cannot afford to ignore this.

    Paint a clearer picture

    Understanding why students turn to AI for pastoral support might help. Reports highlight loneliness and vulnerability as key indicators. One found that 17 per cent of young people valued AI companions because they were “always available,” while 12 per cent said they appreciated being able to share things they could not tell friends or family. Another reported that 12 per cent of young people were using chatbots because they had no one else to talk to – a figure that rose to 23 per cent among vulnerable young people, who were also more likely to use AI for emotional support or therapy.

    We talk often about belonging as the cornerstone of student success and wellbeing – with reducing loneliness a key measure of institutional effectiveness. Pastoral AI use suggests policymakers may have much to learn from this agenda. More thinking is needed to understand why the lure of an always-available, non-judgemental digital “companion” feels so powerful to our students – and what that tells us about our existing support.

    Yet AI discussions in higher education remain narrowly focused on academic integrity and essay writing. Our evidence base reflects this: the Student Generative AI Survey – arguably the best sector-wide tool we have – gives little attention to pastoral or wellbeing-related uses. The result is that data on this area of significant risk remains fragmented and anecdotal. Without a fuller sector-specific understanding of student pastoral AI use, we risk stalling progress on developing effective, sector-wide strategies.

    This means institutions need to start a different kind of AI conversation – one grounded in ethics, wellbeing, and emotional care. It will require drawing on different expertise: not just academics and technologists, but also counsellors, student services staff, pastoral advisers, and mental health professionals. These are the people best placed to understand how AI is reshaping the emotional lives of our students.

    Any serious AI strategy must recognise that students are turning to these tools not just for essays, but for comfort and belonging too, and we must offer something better in return.

    If some of our students find it easier to confide in chatbots than in people, we need to confront what that says about the accessibility and design of our existing support systems, and how we might improve and resource them. Building a pastoral AI strategy is less about finding a perfect solution and more about treating pastoral AI seriously, as a mirror that reflects back at us student loneliness, vulnerabilities, and institutional support gaps. These reflections should push us to re-centre these experiences and to reimagine our pastoral support provision in an image that’s genuinely and unapologetically human.

  • How UNE trained an AI-literate workforce – Campus Review

    Almost all employees at the University of New England (UNE) use AI each day to augment tasks, even as the wider sector has been slow to adopt the technology into its workforce.

  • Generic AI cannot capture higher education’s unwritten rules

    Some years ago, I came across Walter Moberly’s The Crisis in the University. In the years after the Second World War, universities faced a perfect storm: financial strain, shifting student demographics, and a society wrestling with lost values. Every generation has its reckoning. Universities don’t just mirror the societies they serve – they help define what those societies might become.

    Today’s crisis looks very different. It isn’t about reconstruction or mass expansion. It’s about knowledge itself – how it is mediated and shaped in a world of artificial intelligence. The question is whether universities can hold on to their cultural distinctiveness once LLM-enabled workflows start to drive their daily operations.

    The unwritten rules

    Let’s be clear: universities are complicated beasts. Policies, frameworks and benchmarks provide a skeleton. But the flesh and blood of higher education live elsewhere – in the unwritten rules of culture.

    Anyone who has sat through a validation panel, squinted at the spreadsheets for a TEF submission, or tried to navigate an approval workflow knows what I mean. Institutions don’t just run on paperwork; they run on tacit understandings, corridor conversations and half-spoken agreements.

    These practices rarely make it into a handbook – nor should they – but they shape everything from governance to the student experience. And here’s the rub: large language models, however clever, can’t see what isn’t codified. Which means they can’t capture the very rules that make one university distinctive from another.

    The limits of generic AI

    AI is already embedded in the sector. We see it in student support chatbots, plagiarism detection, learning platforms, and back-office systems. But these tools are built on vast, generic datasets. They flatten nuance, reproduce bias and assume a one-size-fits-all worldview.

    Drop them straight into higher education and the risk is obvious: universities start to look interchangeable. An algorithm might churn out a compliant REF impact statement. But it won’t explain why Institution A counts one case study as transformative while Institution B insists on another, or why quality assurance at one university winds its way through a labyrinth of committees while at another it barely leaves the Dean’s desk. This isn’t just a technical glitch. It’s a governance risk. Allow external platforms to hard-code the rules of engagement and higher education loses more than efficiency – it loses identity, and with it agency.

    The temptation to automate is real. Universities are drowning in compliance. Office for Students returns, REF, KEF and TEF submissions, equality reporting, Freedom of Information requests, the Race Equality Charter, endless templates – the bureaucracy multiplies every year.

    Staff are exhausted. Worse, these demands eat into time meant for teaching, research and supporting students. Ministers talk about “cutting red tape,” but in practice the load only increases. Automation looks like salvation. Drafting policies, preparing reports, filling forms – AI can do all this faster and more cheaply.

    But higher education isn’t just about efficiency. It’s also about identity and purpose. If efficiency is pursued at the expense of culture, universities risk hollowing out the very things that make them distinctive.

    Institutional memory matters

    Universities are among the UK’s most enduring civic institutions, each with a long memory shaped by place. A faculty’s interpretation of QAA benchmarks, the way a board debates grade boundaries, the precedents that guide how policies are applied – all of this is institutional knowledge.

    Very little of it is codified. Sit in a Senate meeting or a Council away-day and you quickly see how much depends on inherited understanding. When senior staff leave or processes shift, that memory can vanish – which is why universities so often feel like they are reinventing the wheel.

    Here, human-assistive AI could play a role. Not by replacing people, but by capturing and transmitting tacit practices alongside the formal rulebook. Done well, that kind of LLM could preserve memory without erasing culture.

    So, what does “different” look like? The Turing Institute recently urged the academy to think about AI in relation to the humanities, not just engineering. My own experiments – from the Bernie Grant Archive LLM to a Business Case LLM and a Curriculum Innovation LLM – point in the same direction.

    The principles are clear. Systems should be co-designed with staff, reflecting how people actually work rather than imposing abstract process maps. They must be assistive, not directive – capable of producing drafts and suggestions but always requiring human oversight.

    They need to embed cultural nuance: keeping tone, tradition and tacit practice alive alongside compliance. That way outputs reflect the character of the institution, reinforcing its USP rather than erasing it. They should preserve institutional knowledge by drawing on archives and precedents to create a living record of decision-making. And they must build in error prevention, using human feedback loops to catch hallucinations and conceptual drift.

    Done this way, AI lightens the bureaucratic load without stripping out the culture and identity that make universities what they are.

    The sector’s inflection point

    So back to the existential question. It’s not whether to adopt AI – that ship has already sailed. The real issue is whether universities will let generic platforms reshape them in their image, or whether the sector can design tools that reflect its own values.

    And the timing matters. We’re heading into a decade of constrained funding, student number caps, and rising ministerial scrutiny. Decisions about AI won’t just be about efficiency – they will go to the heart of what kind of universities survive and thrive in this environment.

    If institutions want to preserve their distinctiveness, they cannot outsource AI wholesale. They must build and shape models that reflect their own ways of working – and collaborate across the sector to do so. Otherwise, the invisible knowledge that makes one university different from another will be drained away by automation.

    That means getting specific. Is AI in higher education infrastructure, pedagogy, or governance? How do we balance efficiency with the preservation of tacit knowledge? Who owns institutional memory once it’s embedded in AI – the supplier, or the university? Caveat emptor matters here. And what happens if we automate quality assurance without accounting for cultural nuance?

    These aren’t questions that can be answered in a single policy cycle. But they can’t be ducked either. The design choices being made now will shape not just efficiency, but the very fabric of universities for decades to come.

    The zeitgeist of responsibility

    Every wave of technology promises efficiency. Few pay attention to culture. Unless the sector intervenes, large language models will be no different.

    This is, in short, a moment of responsibility. Universities can co-design AI that reflects their values, reduces bureaucracy and preserves identity. Or they can sit back and watch as generic platforms erode the lifeblood of the sector, automating away the subtle rules that make higher education what it is.

    In 1989, at the start of my BBC career, I stood on the Berlin Wall and watched the world change before my eyes. Today, higher education faces a moment of similar magnitude. The choice is stark: be shapers and leaders, or followers and losers.

  • We cannot address the AI challenge by acting as though assessment is a standalone activity

    How to design reliable, valid and fair assessment in an AI-infused world is one of those challenges that feels intractable.

    The scale and extent of the task, it seems, outstrip the available resource to deal with it. In these circumstances it is always worth stepping back to re-frame, perhaps reconceptualise, what the problem is, exactly. Is our framing too narrow? Have we succeeded (yet) in perceiving the most salient aspects of it?

    As an educational development professional, seeking to support institutional policy and learning and teaching practices, I’ve been part of numerous discussions within and beyond my institution. At first, we framed the problem as a threat to the integrity of universities’ power to reliably and fairly award degrees and to certify levels of competence. How do we safeguard this authority and credibly certify learning when the evidence we collect of the learning having taken place can be mimicked so easily – and when the mimicry is so hard to detect?

    Seen this way the challenge is insurmountable.

    But this framing positions students as devoid of ethical intent, love of learning for its own sake, or capacity for disciplined “digital professionalism”. It also absolves us of the responsibility of providing an education which results in these outcomes. What if we frame the problem instead as a challenge of AI to higher education practices as a whole and not just to assessment? We know the use of AI in HE ranges widely, but we are only just beginning to comprehend the extent to which it redraws the basis of our educative relationship with students.

    Rooted in subject knowledge

    I’m finding that some very old ideas about what constitutes teaching expertise and how students learn are illuminating: the very questions that expert teachers have always asked themselves are in fact newly pertinent as we (re)design education in an AI world. This challenge of AI is not as novel as it first appeared.

    Fundamentally, we are responsible for curriculum design which builds students’ ethical, intellectual and creative development over the course of a whole programme in ways that are relevant to society and future employment. Academic subject content knowledge is at the core of this endeavour and it is this which is the most unnerving part of the challenge presented by AI. I have lost count of the number of times colleagues have said, “I am an expert in [insert relevant subject area], I did not train for this” – where “this” is AI.

    The most resource-intensive need that we have is for an expansion of subject content knowledge: every academic who teaches now needs a subject content knowledge which encompasses a consideration of the interplay between their field of expertise and AI, and specifically the use of AI in learning and professional practice in their field.

    It is only on the basis of this enhanced subject content knowledge that we can then go on to ask: what preconceptions are my students bringing to this subject matter? What prior experience and views do they have about AI use? What precisely will be my educational purpose? How will students engage with this through a newly adjusted repertoire of curriculum and teaching strategies? The task of HE remains a matter of comprehending a new reality and then designing for the comprehension of others. Perhaps the difference now is that the journey of comprehension is even more collaborative and even less finite than it once would have seemed.

    Beyond futile gestures

    All this is not to say that the specific challenge of ensuring that assessment is valid disappears. A universal need for all learners is to develop a capacity for qualitative judgement and to learn to seek, interpret and critically respond to feedback about their own work. AI may well assist in some of these processes, but developing students’ agency, competence and ethical use of it is arguably a prerequisite. In response to this conundrum, some colleagues suggest a return to the in-person examination – even as a baseline to establish in a valid way levels of students’ understanding.

    Let’s leave aside for a moment the argument about the extent to which in-person exams were ever a valid way of assessing much of what we claimed. Rather than focusing on how we can verify students’ learning, let’s emphasise more strongly the need for students themselves to be in touch with the extent and depth of their own understanding, independently of AI.

    What if we reimagined the in-person high stakes summative examination as a low-stakes diagnostic event in which students test and re-test their understanding, capacity to articulate new concepts or design novel solutions? What if such events became periodic collaborative learning reviews? And yes, also a baseline, which assists us all – including students, who after all also have a vested interest – in ensuring that our assessments are valid.

    Treating the challenge of AI as though assessment stands alone from the rest of higher education is too narrow a frame – one that consigns us to a kind of futile authoritarianism which renders assessment practices performative and irrelevant to our and our students’ reality.

    There is much work to do in expanding subject content knowledge and in reimagining our curricula and reconfiguring assessment design at programme level such that it redraws our educative relationship with students. Assessment more than ever has to become a common endeavour rather than something we “provide” to students. A focus on how we conceptualise the trajectory of students’ intellectual, ethical and creative development is inescapable if we are serious about tackling this challenge in a meaningful way.

  • JCU vice-chancellor Simon Biggs – Campus Review

    Vice-chancellor of James Cook University Simon Biggs said artificial intelligence is critical in helping young people with companionship and loneliness.

  • AI-Fueled Fraud in Higher Education

    Colleges across the United States are facing an alarming increase in “ghost students”—fraudulent applicants who infiltrate online enrollment systems, collect financial aid, and vanish before delivering any academic engagement. The problem, fueled by advances in artificial intelligence and weaknesses in identity verification processes, is undermining trust, misdirecting resources, and placing real students at risk.

    What Is a Ghost Student?

    A ghost student is not simply someone who drops out. These are fully fabricated identities—sometimes based on stolen personal information, sometimes entirely synthetic—created to fraudulently enroll in colleges. Fraudsters use AI tools to generate admissions essays, forge transcripts, and even produce deepfake images and videos for identity verification.

    Once enrolled, ghost students typically sign up for online courses, complete minimal coursework to stay active long enough to qualify for financial aid, and then disappear once funds are disbursed.

    Scope and Impact

    The scale of the problem is significant and growing:

    • California community colleges flagged approximately 460,000 suspicious applications in a single year—nearly 20% of the total—resulting in more than $11 million in fraudulent aid disbursements.

    • The College of Southern Nevada reported losing $7.4 million to ghost student fraud in one semester.

    • At Century College in Minnesota, instructors discovered that roughly 15% of students in a single course were fake enrollees.

    • California’s overall community college system reported over $13 million in financial aid losses in a single year due to such schemes—a 74% increase from the previous year.

    The consequences extend beyond financial loss. Course seats are taken up, blocking legitimate students. Faculty spend hours identifying and reporting ghost students. Institutional data becomes unreliable. Most importantly, public trust in higher education systems is eroded.

    Why Now?

    Several developments have enabled this rise in fraud:

    1. The shift to online learning during the pandemic decreased opportunities for in-person identity verification.

    2. AI tools—such as large language models, AI voice generators, and synthetic video platforms—allow fraudsters to create highly convincing fake identities at scale.

    3. Open-access policies at many institutions, particularly community colleges, allow applications to be submitted with minimal verification.

    4. Budget cuts and staff shortages have left many colleges without the resources to identify and remove fake students in a timely manner.

    How Institutions Are Responding

    Colleges and universities are implementing multiple strategies to fight back:

    Identity Verification Tools

    Some institutions now require government-issued IDs matched with biometric verification—such as real-time selfies with liveness detection—to confirm applicants’ identities.

    Faculty-Led Screening

    Instructors are being encouraged to require early student engagement via Zoom, video introductions, or synchronous activities to confirm that enrolled students are real individuals.

    Policy and Federal Support

    The U.S. Department of Education will soon require live ID verification for flagged FAFSA applicants. Some states, such as California, are considering application fees or more robust identity checks at the enrollment stage.

    AI-Driven Pattern Detection

    Tools like LightLeap.AI and ID.me are helping institutions track unusual behaviors such as duplicate IP addresses, linguistic patterns, and inconsistent documentation to detect fraud attempts.
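
    To make the pattern-detection idea concrete, here is a minimal sketch, in Python, of the kind of rule-based screening such a system might automate – flagging applications that share an IP address, reuse a phone number, or duplicate essay text. The record fields, thresholds, and flag wording are illustrative assumptions for this sketch, not a description of how LightLeap.AI, ID.me, or any institution’s system actually works.

    ```python
    # Illustrative only: a toy rule-based screen for "ghost student" applications.
    # Field names (ip_address, phone, essay_text) and thresholds are assumptions
    # made for this sketch, not the schema or logic of any commercial tool.
    from collections import Counter
    from dataclasses import dataclass, field


    @dataclass
    class Application:
        applicant_id: str
        ip_address: str
        phone: str
        essay_text: str
        flags: list = field(default_factory=list)


    def screen_applications(apps, ip_threshold=5, essay_threshold=3):
        """Attach simple fraud flags: many applications from one IP address,
        reused phone numbers, or duplicated essay text."""
        ip_counts = Counter(a.ip_address for a in apps)
        phone_counts = Counter(a.phone for a in apps)
        essay_counts = Counter(a.essay_text.strip().lower() for a in apps)

        for a in apps:
            if ip_counts[a.ip_address] >= ip_threshold:
                a.flags.append(f"shared IP ({ip_counts[a.ip_address]} applications)")
            if phone_counts[a.phone] > 1:
                a.flags.append("phone number reused across applications")
            if essay_counts[a.essay_text.strip().lower()] >= essay_threshold:
                a.flags.append("essay text duplicated across applications")

        # Return only the applications that triggered at least one flag
        return [a for a in apps if a.flags]


    if __name__ == "__main__":
        sample = [
            Application("A1", "203.0.113.7", "555-0100", "My goal is to study nursing."),
            Application("A2", "203.0.113.7", "555-0100", "My goal is to study nursing."),
            Application("A3", "198.51.100.2", "555-0199", "I hope to transfer to a four-year programme."),
        ]
        for flagged in screen_applications(sample, ip_threshold=2, essay_threshold=2):
            print(flagged.applicant_id, flagged.flags)
    ```

    Production systems presumably combine far more signals (device fingerprints, login timing, document metadata) and weight them statistically rather than with fixed thresholds; the point here is only that the underlying checks are simple pattern matches over application data.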

    Recommendations for HEIs

    To mitigate the risk of ghost student infiltration, higher education institutions should:

    • Implement digital identity verification systems before enrollment or aid disbursement.

    • Train faculty and staff to recognize and report suspicious activity early in the semester.

    • Deploy AI tools to detect patterns in application and login data.

    • Foster collaboration across institutions to share data on emerging fraud trends.

    • Communicate transparently with students about new verification procedures and the reasons behind them.

    Why It Matters

    Ghost student fraud is more than a financial threat—it is a systemic risk to educational access, operational efficiency, and institutional credibility. With AI-enabled fraud growing in sophistication, higher education must act decisively to safeguard the integrity of enrollment, instruction, and student support systems.


  • Do we still value original thought?

    I have written the piece that you are now reading. But in the world of AI, what exactly does it mean to say that I’ve written it? 

    As someone who has either written or edited millions of words in my life, this question seems very important. 

    There are plenty of AI aids available to help me in my task. In fact, some are insinuating themselves into our everyday work without our explicit consent. For example, Microsoft inserted a ‘Copilot’ into Word, the programme I’m using. But I have disabled it. 

    I could also insert prompts into a service such as ChatGPT and ask it to write the piece itself. Or I could ask the chatbot direct questions and paste in the answers. Everybody who first encounters these services is amazed by what they can do. The ability to synthesise facts, arguments and ideas and express them in a desired style is truly extraordinary. So it’s possible that using chatbots would make my article more readable, accurate or interesting.

    But in all these cases, I would be using, or perhaps paraphrasing, text that had been generated by a computer. And in my opinion, this would mean that I could no longer say that I had written it. And if that were the case, what would be the point of ‘writing’ the article and putting my name on it?

    Artificial intelligence is a real asset.

    There is no doubt that we benefit from AI, whether it is in faster access to information and services, safer transport, easier navigation, diagnostics and so on. 

    Rather than a revolution, the ever-increasing automation of human tasks seems a natural extension of the expansion of computing power that has been under way since the Second World War. Computers crunch data, find patterns and generate results that simulate those patterns. In general, this saves time and effort and enhances our lives.

    So at what point does the use of AI become worrying? To me, the answer is in the generation of content that purports to be created by specific humans but is in fact not. 

    The world of education is grappling with this issue. AI gathers information, orders and analyses it, and is able to answer questions about it, whether in papers or other ways. In other words, all the tasks that a student is supposed to perform! 

    At the simplest level, students can ask a computer to do the work and submit it as their own. Schools and universities have means to detect this, but there are also ways to avoid detection. 

    The human touch

    From my limited knowledge, text produced with the help of AI can seem sterile, distanced from both the ‘writer’ and the topic. In a word, dehumanised. And this is not surprising, because it is written by a robot. How is a teacher to grade a paper that seems to have been produced in this way?

    There is no point in moralising about this. The technologies cannot be un-invented. In fact, tech companies are investing hundreds of billions of dollars in vast amounts of additional computing power that will make robots ever more present in our lives. 

    So schools and universities will have to adjust. Some of the university websites that I’ve looked at are struggling to produce straightforward, coherent guidance for students. 

    The aim must be, on the one hand, to enable students to use all the available technologies to do their research, whether the goal is to write a first-year paper or a PhD thesis, and on the other hand to use their own brains to absorb and order their research, and to express their own analysis of it. They need to be able to think for themselves. 

    Methods to prove that they can do this might be to have hand-written exams, or to test them in viva voce interviews. Clearly, these would work for many students and many subjects, but not for all. On the assumption that all students are going to use AI for some of their tasks, the onus is on educational establishments to find new ways to make sure that students can absorb information and express their analysis on their own.

    Can bots break a news story?

    If schools and universities cannot do that, there will be no point in going to university at all. Obtaining a degree will have no meaning, and people will emerge from education without having learned how to use their brains.

    Another controversial area is my own former profession, journalism. Computers have subsumed many of the crafts that used to be involved in creating a newspaper. They can make the layouts, customise outputs, match images to content, and so on. 

    But only a human can spot what might be a hot political story, or describe the situation on the ground in Ukraine.  

    Journalists are right to be using AI for many purposes, for example to discover stories by analysing large sets of data. Meanwhile, more menial jobs involving statistics, such as writing up companies’ financial results and reporting on sports events, could be delegated to computers. But these stories might be boring and could miss newsworthy aspects, as well as the context and the atmosphere. Plus, does anybody actually want to read a story written by a robot? 

    Just like universities, serious media organisations are busy evolving AI policies so as to maintain a competitive edge and inform and entertain their target audiences, while ensuring credibility and transparency. This is all the more important when the dissemination of lies and fake images is so easy and prevalent. 

    Can AI replace an Ai Weiwei? 

    The creative arts are also vulnerable to AI-assisted abuse. It’s so easy to steal someone’s music, films, videos, books, indeed all types of creative content. Artists are right to appeal for legal protection. But effective regulation is going to be difficult.  

    There are good reasons, however, for people to regulate themselves. Yes, AI’s potential uses are amazing, even frightening. But it gets its material from trawling every possible type of content that it can via the internet. 

    That content is, by definition, second hand. The result of AI’s trawling of the internet is like a giant bowl of mush. Dip your spoon into it, and it will still be other people’s mush. 

    If you want to do something original, use your own brain to do it. If you don’t use your own intelligence and your own capabilities, they will wither away.

    And so I have done that. This piece may not be brilliant. But I wrote it.

    Questions to consider:

    1. If artificial intelligence writes a story or creates a piece of art, can that be considered original?

    2. How can journalists use artificial intelligence to better serve the public?

    3. In what ways do you think artificial intelligence is more helpful or harmful to professions like journalism and the arts?

  • Universities need to reckon with how AI is being used in professional practice

    One of the significant themes in higher education over the last couple of decades has been employability – preparing students for the world of work into which they will be released on graduation.

    And one of the key contemporary issues for the sector is the attempt to come to grips with the changes to education in an AI-(dis)empowered world.

    The next focus, I would argue, will involve a combination of the two – are universities (and regulators) ready to prepare students for the AI-equipped workplaces in which they will be working?

    The robotics of law

    Large, international law firms have been using AI alongside humans for some time, and there are examples of its use for the drafting of non-disclosure agreements and contracts, for example.

    In April 2025, the Solicitors Regulation Authority authorised Garfield Law, a small firm specialising in small-claims debt recovery. This was remarkable only in that Garfield Law is the first law firm in the world to deliver services entirely through artificial intelligence.

    Though small and specialised, the approval of Garfield Law was a significant milestone – and a moment of reckoning – for both the legal profession and legal education. If a law firm can be a law firm without humans, what is the future for legal education?

    Indeed, I would argue that the HE sector as a whole is largely unprepared for a near-future in which the efficient application of professional knowledge is no longer the sole purview of humans.

    Professional subjects such as law, medicine, engineering and accountancy have tended to think of themselves as relatively “technology-proof” – where technology was broadly regarded as useful, rather than a usurper. Master of the Rolls Geoffrey Vos said in March that AI tools

    may be scary for lawyers, but they will not actually replace them, in my view at least… Persuading people to accept legal advice is a peculiarly human activity.

    The success or otherwise of Garfield Law will show how the public react, and whether Vos is correct. This vision of these subjects as high-skill, human-centric domains needing empathy, judgement, ethics and reasoning is not the bastion it once was.

    In the same speech, Vos also said that, in terms of using AI in dispute resolution, “I remember, even a year ago, I was frightened even to suggest such things, but now they are commonplace ideas”. Such is the pace at which AI is developing.

    Generative AI tools can be, and are being, used in contract drafting, judgement summaries, case law identification, medical scanning, operations, market analysis, and a raft of other activities. Garfield Law represents a world view where routine, and once billable, tasks performed by trainees and paralegals will most likely be automated. AI is challenging the traditional boundaries of what it means to be a professional and, in concert with this, challenging conceptions of what it is to teach, assess and accredit future professionals.

    Feeling absorbed

    Across the HE sector, the first reaction to the emergence of generative AI was largely (and predictably) defensive. Dire warnings to students (and colleagues) about “cheating” and using generative AI inappropriately were followed by hastily-constructed policies and guidelines, and the unironic and ineffective deployment of AI-powered AI detectors.

    The hole in the dyke duly plugged, the sector then set about wondering what to do next about this new threat. “Assessments” came the cry, “we must make them AI-proof. Back to the exam hall!”

    Notwithstanding my personal pedagogic aversion to closed-book, memory-recall examinations, such a move was only ever going to be a stopgap. There is a deeper pedagogic issue in learning and teaching: we focus on students’ absorption, recall and application of information – which, to be frank, is instantly available via AI. Admittedly, it has been instantly available since the arrival of the Internet, but we’ve largely been pretending it hasn’t for three decades.

    A significant amount of traditional legal education focuses on black-letter law, case law, analysis and doctrinal reasoning. There are AI tools which can already do this and provide “reasonably accurate legal advice” (Vos again), so the question arises as to what is our end goal in preparing students? The answer, surely, is skills – critical judgement, contextual understanding, creative problem solving and ethical reasoning – areas where (for the moment, at least) AI still struggles.

    Fit for purpose

    And yet, and yet. In professional courses like law, we still very often design courses around subject knowledge, and often try to “embed” the skills elements afterwards. We too often resort to tried and tested assessments which reward memory (closed-book exams), formulaic answers (problem questions) and performance under time pressure (time constrained assessments). These are the very areas in which AI performs well, and increasingly is able to match, or out-perform humans.

    At the heart of educating students to enter professional jobs there is an inherent conflict. On the one hand, we are preparing students for careers which either do not yet exist, or may be fundamentally changed – or displaced – by AI. On the other, the regulatory bodies are often still locked into twentieth century assumptions about demonstrating competence.

    Take the Solicitors Qualifying Examination (SQE), for example. Relatively recently introduced, the SQE was intended to bring consistency and accessibility into the legal profession. The assessment is nonetheless still based on multiple choice questions and unseen problem questions – areas where AI can outperform many students. There are already tools out there to help SQE students practise (Chat SQE, Kinnu Law), though no AI tool has yet completed the SQE itself. In the US, however, GPT-4 passed the Uniform Bar Exam in 2023, outperforming some human candidates.

    If a chatbot can ace your professional qualifying exam, is that exam fit for purpose? In other disciplines, the same question arises. Should medical students be assessed on their recall of rare diseases? Should business students be tested on their SWOT analyses? Should accounting students analyse corporate accounts? Should engineers calculate stress tolerances manually? All of these things can be completed by AI.

    Moonshots

    Regulatory bodies, universities and employers need to come together more than ever to seriously engage with what AI competency might look like – both in the workplace and the lecture theatre. Taking the approach of some regulators and insisting on in-person exams to prepare students for an industry entirely lacking in exams probably is not it. What does it mean to be an ethical, educated and adaptable professional in the age of AI?

    The HE sector urgently needs to move beyond discussions about whether or not students should be allowed to use AI. It is here, it is getting more powerful, and it is never leaving. Instead, we need to focus on how we assess in a world where AI is always on tap. If we cannot tell the difference between AI-generated work and student-generated work (and increasingly we cannot) then we need to shift our focus towards the process of learning rather than the outputs. Many institutions have made strides in this direction, using reflective journals, project-based learning and assessments which reward students for their ability to question, think, explain and justify their answers.

    This is likely to mean increased emphasis on live assessments – advocacy, negotiations, client interviews or real-world clinical experience. In other disciplines too, simulations, inter- and multi-disciplinary challenges, or industry-related authentic assessments. These are nothing revolutionary: they are pedagogically sound and all have been successfully implemented. They do, however, demand more of us as academics. More time, more support, more creativity. Scaling up from smaller modules to large cohorts is not an easy feat. It is much easier to keep doubling down on what we already do, and hiding behind regulatory frameworks. However, we need to do these things (to quote JFK)

    not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone.

    In law schools, how many of us teach students how to use legal technology, how to understand algorithmic biases, or how to critically assess AI-generated legal advice? How many business schools teach students how to work alongside AI? How many medical schools give students the opportunity to learn how to critically interpret AI-generated diagnostics? The concept of “digital professionalism” – the ability to effectively and ethically use AI in a professional setting – is becoming a core graduate-level skill.

    If universities fail to take the lead on this, then private providers will be eager, and quick, to fill the void. We already have short courses, boot camps, and employer-led schemes which offer industry-tailored AI literacy programmes – and if universities start to look outdated and slow to adapt, students will vote with their feet.

    Invention and reinvention

    However, AI is not necessarily the enemy. Like all technological advances, it is essentially nothing more than a tool. As with all tools – the stone axe, the printing press, the internet – it brings with it threats to some and opportunities for others. We have identified some of the threats, but also the opportunities that, with proper use, AI can bring – enhanced learning, deeper engagement, and democratisation of access to knowledge. Like the printing press, the real threat faced by HE is not the tool, but a failure to adapt to it. Nonetheless, a surprising number of academics are dusting off their metaphorical sabots to try and stop the development of AI.

    We should be working with the relevant sector and regulator and asking ourselves how we can adapt our courses and use AI to support, rather than substitute for, genuine learning. We have an opportunity to teach students how to move away from being consumers of AI outputs, and how to become critical users, questioners and collaborators. We need to stop being reactive to AI – after all, it is developing faster than we ever can.

    Instead, we need to move towards reinvention. This could mean: embedding AI literacy in all disciplines; refocusing assessments to require more creative, empathetic, adaptable and ethical skills; preparing students and staff to work alongside AI, not to fear it; and closer collaboration with professional regulators.

    AI is being used in many professions, and the use will inevitably grow significantly over the next few years. Educators, regulators and employers need to work even more closely together to prepare students for this new world. Garfield Law is (currently) a one-off, and while it might be tempting to dismiss the development as tokenistic gimmickry, it is more than that.

    Professional courses are standing on the top of a diving board. We can choose obsolescence and climb back down, clinging to outdated practices and condemning ourselves to irrelevance. Or we can choose opportunity and dive into a more dynamic, responsive and human vision of professional learning.

    We just have to be brave enough to take the plunge.
