Category: Generative AI

  • Regurgitative AI: Why ChatGPT Won’t Kill Original Thought – Faculty Focus

    Source link

  • Preparing students for the world of work means embracing an AI-positive culture

    When ChatGPT was released in November 2022, it sent shockwaves through higher education.

    In response, universities moved at pace during the first half of 2023 to develop policy and good practice guidance for staff and students on appropriate use of GenAI for education purposes; the Russell Group’s Principles on the use of generative AI tools in education are particularly noteworthy. Developments since, however, have been fairly sluggish by comparison.

    The sector is still very much at an exploratory phase of development: funding pilots, individual staff using AI tools for formative learning and assessment, baseline studies of practice, student and staff support, and building an understanding of tools’ functionality and utilisation. The result is a patchwork of practice, not a coherent strategy.

    Yet AI literacy is one of the fastest growing skills demanded by industry leaders. In a survey of 500 business leaders from organisations in the US and UK, over two-thirds of respondents considered it essential for day-to-day work. Within AI literacy, demand for foundation skills such as understanding AI-related concepts, being able to prompt outputs and identify use cases surpassed demand for advanced skills such as developing AI systems.

    Students understand this too. In HEPI’s Student Generative AI Survey 2025, 67 per cent of student respondents felt that it was essential to understand and use AI to be successful in the workplace, whereas only 36 per cent felt they had received AI skill-specific support from their institution.

    The resulting gap between universities’ current support provision and the needs of industry and business presents a significant risk.

    Co-creation for AI literacy

    Building students’ AI literacy involves defining AI literacy, designing courses aligned with identified learning outcomes, and assessing those outcomes.

    The higher education sector has a good understanding of AI literacy at a cross-disciplinary level, articulated through several AI literacy frameworks, such as UNESCO’s AI Competency Framework for Students or the UK Open University’s own framework. However, most universities have yet to articulate nuanced, discipline-specific definitions of AI literacy beyond specialist AI-related subjects.

    Assessment and AI continues to be a critical challenge. Introducing AI tools in the classroom to enhance student learning and formatively assess students is fairly commonplace; however, summative assessment of students’ effective use of AI is much less so. Such “authentic assessments” are essential if we are serious about adequately preparing our students for the future world of work. Much of the negative discourse around AI in pedagogy has been around academic integrity and concerns that students’ critical thinking is being stifled. But there is a different way to think about generative AI.

    Co-creation between staff and students is a well-established principle for modern higher education pedagogy; there are benefits for both students and educators such as deeper engagement, shared sense of ownership and enhanced learning outcomes. Co-creation in the age of AI now involves three co-creators: students, educators and AI.

    Effective adoption and implementation of AI offers benefits specific to students, benefits specific to educators, and a range of mutual benefits. For example, AI, in conjunction with educators, offers the potential to significantly enhance the personalisation of students’ experience on an on-demand basis, regardless of the time of day. AI can also greatly assist with assessment processes, for instance by speeding up marking turnaround and improving the consistency of feedback to students. AI also gives staff greater data-driven insight, for example into students at risk of non-progression, or areas where students performed well or struggled in assessments, allowing targeted follow-up support.

    There is a wealth of opportunity for innovation and scholarship as the potential of co-creation and quality enhancement involving staff, students and AI is in its infancy and technology continues to evolve at pace.

    Nurturing an AI-positive culture

    At Queen Mary University of London, we are funding various AI in education pilots, offering staff development programmes and student-led activities, and, through our new Centre for Excellence in AI Education, embedding AI meaningfully across disciplines. Successfully embedding AI within university policy and practice across the breadth of the institution’s operations (education, research and professional practice) requires an AI-positive culture.

    Adoption of AI that aligns with the University’s values and strategy is key. It should be an enabler rather than some kind of add-on. Visible executive leadership for AI is critical, supported by effective use of existing champions within schools and faculties, professional services and the student body to harness expertise, provide support and build capacity. In some disciplines, our students may even be our leading institutional AI experts.

    Successful engagement and partnership working with industry, business and alumni is key to ensuring our graduates continue to have the necessary skills, knowledge and AI literacy to achieve success in the developing workplace.

    There is no escaping the fact that embedding AI within all aspects of a university’s operations requires significant investment, not only in technology but also in people. In our experience, providing practical support through CPD, case studies and multimedia storytelling, whilst ensuring space for debate, is essential for a vibrant, evolving community of practice.

    A key challenge is maintaining oversight and co-ordinating activities in large, complex institutions in a field that is evolving rapidly. Providing the necessary scaffolding in terms of strategy and policy, regulatory compliance and appropriate infrastructure, whilst ensuring there is sufficient flexibility to allow agility and encourage innovation, is another key factor for an AI-positive culture to thrive.

    AI is reshaping society and building an AI-positive culture is central to the future of higher education. Through strategic clarity and cultural readiness, universities need to effectively harness AI to enhance student learning, support staff, improve productivity and prepare students for a changing world.

    Source link

  • Higher education needs a plan in place for student “pastoral” use of AI

    With 18 per cent of students reporting mental health difficulties, a figure which has tripled in just seven years, universities are navigating a crisis.

    The student experience can compound many of the risk factors for poor mental health – from managing constrained budgets and navigating the cost of learning crisis, to moving away from established support systems, and balancing high-stakes assessment with course workload and part-time work.

    In response, universities provide a range of free support services, including counselling and wellbeing provision, alongside specialist mental health advisory services. But if we’re honest, these services are under strain. Despite rising expenditure, they’re still often under-resourced, overstretched, and unable to keep pace with growing demand. With staff-student ratios at impossible levels and wait times for therapeutic support often exceeding ten weeks, some students are turning to alternatives for more immediate care.

    And in this void, artificial intelligence is stepping in. While ChatGPT-written essays dominate the sector’s AI discussions, the rise of “pastoral AI” highlights a far more urgent and overlooked AI use case – with consequences more troubling than academic misconduct.

    Affective conversations

    For the uninitiated, the landscape of “affective” or “pastoral” AI is broad. Mainstream tools like Microsoft’s Copilot or OpenAI’s ChatGPT are designed for productivity, not emotional support. Yet research suggests that users increasingly turn to them for exactly that – seeking help with breakups, mental health advice, and other life challenges, as well as essay writing. While affective conversations may account for only a small proportion of overall use (under three per cent in some studies), the full picture is poorly understood.

    Then there are AI “companions” such as Replika or Character.AI – chatbots built specifically for affective use. These are optimised to listen, respond with empathy, offer intimacy, and provide virtual friendship, confidants, or even “therapy”.

    This is not a fringe phenomenon. Replika claims over 25 million users, while Snapchat’s My AI counts more than 150 million. The numbers are growing fast. As the affective capacity of these tools improves, they are becoming some of the most popular and intensively used forms of generative AI – and increasingly addictive.

    A recent report found that users spend an average of 86 minutes a day with AI companions – more than on Instagram or YouTube, and not far behind TikTok. These bots are designed to keep users engaged, often relying on sycophantic feedback loops that affirm worldviews regardless of truth or ethics. Because large language models are trained in part through human feedback, their output is often highly sycophantic – “agreeable” responses which are persuasive and pleasing – and these can become especially risky in emotionally charged conversations with vulnerable users.

    Empathy optimisations

    For students already experiencing poor mental health, the risks are acute. Evidence is emerging that these engagement-at-all-costs chatbots rarely guide conversations to a natural resolution. Instead, their sycophancy can fuel delusions, amplify mania, or validate psychosis.

    Adding to these concerns, legal cases and investigative reporting are surfacing deeply troubling examples: chatbots encouraging violence, sending unsolicited sexual content, reinforcing delusional thinking, or nudging users to buy them virtual gifts. One case alleged a chatbot encouraged a teenager to murder his parents after they restricted his screen time; another saw a chatbot advise a fictional recovering meth addict to take a “small hit” after a bad week. These are not outliers but the predictable by-products of systems optimised for empathy but unbound by ethics.

    And it’s young people who are engaging with them most. More than 70 per cent of companion app users are aged 18 to 35, and two-thirds of Character.AI’s users are 18 to 24 – the same demographic that makes up the majority of our student population.

    The potential harm here is not speculative. It is real and affecting students right now. Yet “pastoral” AI use remains almost entirely absent from higher education’s AI conversations. That is a mistake. With lawsuits now spotlighting cases of AI “encouraged” suicides among vulnerable young people – many of whom first encountered AI through academic use – the sector cannot afford to ignore this.

    Paint a clearer picture

    Understanding why students turn to AI for pastoral support might help. Reports highlight loneliness and vulnerability as key indicators. One found that 17 per cent of young people valued AI companions because they were “always available,” while 12 per cent said they appreciated being able to share things they could not tell friends or family. Another reported that 12 per cent of young people were using chatbots because they had no one else to talk to – a figure that rose to 23 per cent among vulnerable young people, who were also more likely to use AI for emotional support or therapy.

    We talk often about belonging as the cornerstone of student success and wellbeing – with reducing loneliness a key measure of institutional effectiveness. Pastoral AI use suggests policymakers may have much to learn from this agenda. More thinking is needed to understand why the lure of an always-available, non-judgemental digital “companion” feels so powerful to our students – and what that tells us about our existing support.

    Yet AI discussions in higher education remain narrowly focused on academic integrity and essay writing. Our evidence base reflects this: the Student Generative AI Survey – arguably the best sector-wide tool we have – gives little attention to pastoral or wellbeing-related uses. The result is that data on this area of significant risk remains fragmented and anecdotal. Without a fuller, sector-specific understanding of student pastoral AI use, we risk stalling progress on developing effective, sector-wide strategies.

    This means institutions need to start a different kind of AI conversation – one grounded in ethics, wellbeing, and emotional care. It will require drawing on different expertise: not just academics and technologists, but also counsellors, student services staff, pastoral advisers, and mental health professionals. These are the people best placed to understand how AI is reshaping the emotional lives of our students.

    Any serious AI strategy must recognise that students are turning to these tools not just for essays, but for comfort and belonging too, and we must offer something better in return.

    If some of our students find it easier to confide in chatbots than in people, we need to confront what that says about the accessibility and design of our existing support systems, and how we might improve and resource them. Building a pastoral AI strategy is less about finding a perfect solution and more about treating pastoral AI seriously, as a mirror which reflects back at us student loneliness, vulnerabilities, and institutional support gaps. These reflections should push us to re-centre those experiences and to reimagine our pastoral support provision in an image that is genuinely and unapologetically human.

    Source link

  • Generic AI cannot capture higher education’s unwritten rules

    Some years ago, I came across Walter Moberly’s The Crisis in the University. In the years after the Second World War, universities faced a perfect storm: financial strain, shifting student demographics, and a society wrestling with lost values. Every generation has its reckoning. Universities don’t just mirror the societies they serve – they help define what those societies might become.

    Today’s crisis looks very different. It isn’t about reconstruction or mass expansion. It’s about knowledge itself – how it is mediated and shaped in a world of artificial intelligence. The question is whether universities can hold on to their cultural distinctiveness once LLM-enabled workflows start to drive their daily operations.

    The unwritten rules

    Let’s be clear: universities are complicated beasts. Policies, frameworks and benchmarks provide a skeleton. But the flesh and blood of higher education live elsewhere – in the unwritten rules of culture.

    Anyone who has sat through a validation panel, squinted at the spreadsheets for a TEF submission, or tried to navigate an approval workflow knows what I mean. Institutions don’t just run on paperwork; they run on tacit understandings, corridor conversations and half-spoken agreements.

    These practices rarely make it into a handbook – nor should they – but they shape everything from governance to the student experience. And here’s the rub: large language models, however clever, can’t see what isn’t codified. Which means they can’t capture the very rules that make one university distinctive from another.

    The limits of generic AI

    AI is already embedded in the sector. We see it in student support chatbots, plagiarism detection, learning platforms, and back-office systems. But these tools are built on vast, generic datasets. They flatten nuance, reproduce bias and assume a one-size-fits-all worldview.

    Drop them straight into higher education and the risk is obvious: universities start to look interchangeable. An algorithm might churn out a compliant REF impact statement. But it won’t explain why Institution A counts one case study as transformative while Institution B insists on another, or why quality assurance at one university winds its way through a labyrinth of committees while at another it barely leaves the Dean’s desk. This isn’t just a technical glitch. It’s a governance risk. Allow external platforms to hard-code the rules of engagement and higher education loses more than efficiency – it loses identity, and with it agency.

    The temptation to automate is real. Universities are drowning in compliance. Office for Students returns, REF, KEF and TEF submissions, equality reporting, Freedom of Information requests, the Race Equality Charter, endless templates – the bureaucracy multiplies every year.

    Staff are exhausted. Worse, these demands eat into time meant for teaching, research and supporting students. Ministers talk about “cutting red tape,” but in practice the load only increases. Automation looks like salvation. Drafting policies, preparing reports, filling forms – AI can do all this faster and more cheaply.

    But higher education isn’t just about efficiency. It’s also about identity and purpose. If efficiency is pursued at the expense of culture, universities risk hollowing out the very things that make them distinctive.

    Institutional memory matters

    Universities are among the UK’s most enduring civic institutions, each with a long memory shaped by place. A faculty’s interpretation of QAA benchmarks, the way a board debates grade boundaries, the precedents that guide how policies are applied – all of this is institutional knowledge.

    Very little of it is codified. Sit in a Senate meeting or a Council away-day and you quickly see how much depends on inherited understanding. When senior staff leave or processes shift, that memory can vanish – which is why universities so often feel like they are reinventing the wheel.

    Here, human-assistive AI could play a role. Not by replacing people, but by capturing and transmitting tacit practices alongside the formal rulebook. Done well, that kind of LLM could preserve memory without erasing culture.

    So, what does “different” look like? The Turing Institute recently urged the academy to think about AI in relation to the humanities, not just engineering. My own experiments – from the Bernie Grant Archive LLM to a Business Case LLM and a Curriculum Innovation LLM – point in the same direction.

    The principles are clear. Systems should be co-designed with staff, reflecting how people actually work rather than imposing abstract process maps. They must be assistive, not directive – capable of producing drafts and suggestions but always requiring human oversight.

    They need to embed cultural nuance: keeping tone, tradition and tacit practice alive alongside compliance. That way outputs reflect the character of the institution, reinforcing its USP rather than erasing it. They should preserve institutional knowledge by drawing on archives and precedents to create a living record of decision-making. And they must build in error prevention, using human feedback loops to catch hallucinations and conceptual drift.

    Done this way, AI lightens the bureaucratic load without stripping out the culture and identity that make universities what they are.

    The sector’s inflection point

    So back to the existential question. It’s not whether to adopt AI – that ship has already sailed. The real issue is whether universities will let generic platforms reshape them in their image, or whether the sector can design tools that reflect its own values.

    And the timing matters. We’re heading into a decade of constrained funding, student number caps, and rising ministerial scrutiny. Decisions about AI won’t just be about efficiency – they will go to the heart of what kind of universities survive and thrive in this environment.

    If institutions want to preserve their distinctiveness, they cannot outsource AI wholesale. They must build and shape models that reflect their own ways of working – and collaborate across the sector to do so. Otherwise, the invisible knowledge that makes one university different from another will be drained away by automation.

    That means getting specific. Is AI in higher education infrastructure, pedagogy, or governance? How do we balance efficiency with the preservation of tacit knowledge? Who owns institutional memory once it’s embedded in AI – the supplier, or the university? Caveat emptor matters here. And what happens if we automate quality assurance without accounting for cultural nuance?

    These aren’t questions that can be answered in a single policy cycle. But they can’t be ducked either. The design choices being made now will shape not just efficiency, but the very fabric of universities for decades to come.

    The zeitgeist of responsibility

    Every wave of technology promises efficiency. Few pay attention to culture. Unless the sector intervenes, large language models will be no different.

    This is, in short, a moment of responsibility. Universities can co-design AI that reflects their values, reduces bureaucracy and preserves identity. Or they can sit back and watch as generic platforms erode the lifeblood of the sector, automating away the subtle rules that make higher education what it is.

    In 1989, at the start of my BBC career, I stood on the Berlin Wall and watched the world change before my eyes. Today, higher education faces a moment of similar magnitude. The choice is stark: be shapers and leaders, or followers and losers.

    Source link

  • We cannot address the AI challenge by acting as though assessment is a standalone activity

    How to design reliable, valid and fair assessment in an AI-infused world is one of those challenges that feels intractable.

    The scale and extent of the task, it seems, outstrips the available resource to deal with it. In these circumstances it is always worth stepping back to re-frame, perhaps reconceptualise, what the problem is, exactly. Is our framing too narrow? Have we succeeded (yet) in perceiving the most salient aspects of it?

    As an educational development professional seeking to support institutional policy and learning and teaching practices, I’ve been part of numerous discussions within and beyond my institution. At first, we framed the problem as a threat to the integrity of universities’ power to reliably and fairly award degrees and to certify levels of competence. How do we safeguard this authority and credibly certify learning when the evidence we collect of learning having taken place can be mimicked so easily, and so undetectably to boot?

    Seen this way the challenge is insurmountable.

    But this framing positions students as devoid of ethical intent, love of learning for its own sake, or capacity for disciplined “digital professionalism”. It also absolves us of the responsibility of providing an education which results in these outcomes. What if we frame the problem instead as a challenge of AI to higher education practices as a whole and not just to assessment? We know the use of AI in HE ranges widely, but we are only just beginning to comprehend the extent to which it redraws the basis of our educative relationship with students.

    Rooted in subject knowledge

    I’m finding that some very old ideas about what constitutes teaching expertise and how students learn are illuminating: the very questions that expert teachers have always asked themselves are in fact newly pertinent as we (re)design education in an AI world. This challenge of AI is not as novel as it first appeared.

    Fundamentally, we are responsible for curriculum design which builds students’ ethical, intellectual and creative development over the course of a whole programme in ways that are relevant to society and future employment. Academic subject content knowledge is at the core of this endeavour and it is this which is the most unnerving part of the challenge presented by AI. I have lost count of the number of times colleagues have said, “I am an expert in [insert relevant subject area], I did not train for this” – where “this” is AI.

    The most resource-intensive need that we have is for an expansion of subject content knowledge: every academic who teaches now needs a subject content knowledge which encompasses a consideration of the interplay between their field of expertise and AI, and specifically the use of AI in learning and professional practice in their field.

    It is only on the basis of this enhanced subject content knowledge that we can then go on to ask: what preconceptions are my students bringing to this subject matter? What prior experience and views do they have about AI use? What precisely will be my educational purpose? How will students engage with this through a newly adjusted repertoire of curriculum and teaching strategies? The task of HE remains a matter of comprehending a new reality and then designing for the comprehension of others. Perhaps the difference now is that the journey of comprehension is even more collaborative and even less finite than it once would have seemed.

    Beyond futile gestures

    All this is not to say that the specific challenge of ensuring that assessment is valid disappears. A universal need for all learners is to develop a capacity for qualitative judgement and to learn to seek, interpret and critically respond to feedback about their own work. AI may well assist in some of these processes, but developing students’ agency, competence and ethical use of it is arguably a prerequisite. In response to this conundrum, some colleagues suggest a return to the in-person examination – even if only as a baseline for establishing, in a valid way, students’ levels of understanding.

    Let’s leave aside for a moment the argument about the extent to which in-person exams were ever a valid way of assessing much of what we claimed. Rather than focusing on how we can verify students’ learning, let’s emphasise more strongly the need for students themselves to be in touch with the extent and depth of their own understanding, independently of AI.

    What if we reimagined the in-person high stakes summative examination as a low-stakes diagnostic event in which students test and re-test their understanding, capacity to articulate new concepts or design novel solutions? What if such events became periodic collaborative learning reviews? And yes, also a baseline, which assists us all – including students, who after all also have a vested interest – in ensuring that our assessments are valid.

    Treating the challenge of AI as though assessment stands alone from the rest of higher education is too narrow a frame – one that consigns us to a kind of futile authoritarianism which renders assessment practices performative and irrelevant to our and our students’ reality.

    There is much work to do in expanding subject content knowledge and in reimagining our curricula and reconfiguring assessment design at programme level such that it redraws our educative relationship with students. Assessment more than ever has to become a common endeavour rather than something we “provide” to students. A focus on how we conceptualise the trajectory of students’ intellectual, ethical and creative development is inescapable if we are serious about tackling this challenge in a meaningful way.

    Source link

  • Universities need to reckon with how AI is being used in professional practice

    One of the significant themes in higher education over the last couple of decades has been employability – preparing students for the world of work into which they will be released on graduation.

    And one of the key contemporary issues for the sector is the attempt to come to grips with the changes to education in an AI-(dis)empowered world.

    The next focus, I would argue, will involve a combination of the two – are universities (and regulators) ready to prepare students for the AI-equipped workplaces in which they will be working?

    The robotics of law

    Large, international law firms have been using AI alongside humans for some time, and there are examples of its use for the drafting of non-disclosure agreements and contracts, for example.

    In April 2025, the Solicitors Regulation Authority authorised Garfield Law, a small firm specialising in small-claims debt recovery. This was remarkable only in that Garfield Law is the first law firm in the world to deliver services entirely through artificial intelligence.

    Though the firm is small and specialised, the approval of Garfield Law was a significant milestone – and a moment of reckoning – for both the legal profession and legal education. If a law firm can be a law firm without humans, what is the future for legal education?

    Indeed, I would argue that the HE sector as a whole is largely unprepared for a near-future in which the efficient application of professional knowledge is no longer the sole purview of humans.

    Professional subjects such as law, medicine, engineering and accountancy have tended to think of themselves as relatively “technology-proof” – where technology was broadly regarded as useful, rather than a usurper. Master of the Rolls Geoffrey Vos said in March that AI tools

    may be scary for lawyers, but they will not actually replace them, in my view at least… Persuading people to accept legal advice is a peculiarly human activity.

    The success or otherwise of Garfield Law will show how the public react, and whether Vos is correct. This vision of these subjects as high-skill, human-centric domains needing empathy, judgement, ethics and reasoning is not the bastion it once was.

    In the same speech, Vos also said that, in terms of using AI in dispute resolution, “I remember, even a year ago, I was frightened even to suggest such things, but now they are commonplace ideas”. Such is the pace at which AI is developing.

    Generative AI tools can be, and are being, used in contract drafting, judgement summaries, case law identification, medical scanning, operations, market analysis, and a raft of other activities. Garfield Law represents a world view where routine, and once billable, tasks performed by trainees and paralegals will most likely be automated. AI is challenging the traditional boundaries of what it means to be a professional and, in concert with this, challenging conceptions of what it is to teach, assess and accredit future professionals.

    Feeling absorbed

    Across the HE sector, the first reaction to the emergence of generative AI was largely (and predictably) defensive. Dire warnings to students (and colleagues) about “cheating” and using generative AI inappropriately were followed by hastily-constructed policies and guidelines, and the unironic and ineffective deployment of AI-powered AI detectors.

    The hole in the dyke duly plugged, the sector then set about wondering what to do next about this new threat. “Assessments” came the cry, “we must make them AI-proof. Back to the exam hall!”

    Notwithstanding my personal pedagogic aversion to closed-book, memory-recall examinations, such a move was only ever going to be a stopgap. There is a deeper pedagogic issue in learning and teaching: we focus on students’ absorption, recall and application of information – which, to be frank, is instantly available via AI. Admittedly, it has been instantly available since the arrival of the Internet, but we’ve largely been pretending it hasn’t for three decades.

    A significant amount of traditional legal education focuses on black-letter law, case law, analysis and doctrinal reasoning. There are AI tools which can already do this and provide “reasonably accurate legal advice” (Vos again), so the question arises as to what is our end goal in preparing students? The answer, surely, is skills – critical judgement, contextual understanding, creative problem solving and ethical reasoning – areas where (for the moment, at least) AI still struggles.

    Fit for purpose

    And yet, and yet. In professional courses like law, we still very often design courses around subject knowledge, and often try to “embed” the skills elements afterwards. We too often resort to tried and tested assessments which reward memory (closed-book exams), formulaic answers (problem questions) and performance under time pressure (time constrained assessments). These are the very areas in which AI performs well, and increasingly is able to match, or out-perform humans.

    At the heart of educating students to enter professional jobs there is an inherent conflict. On the one hand, we are preparing students for careers which either do not yet exist, or may be fundamentally changed – or displaced – by AI. On the other, the regulatory bodies are often still locked into twentieth century assumptions about demonstrating competence.

    Take the Solicitors Qualifying Examination (SQE), for example. Relatively recently introduced, the SQE was intended to bring consistency and accessibility into the legal profession. The assessment is nonetheless still based on multiple choice questions and unseen problem questions – areas where AI can outperform many students. There are already tools out there to help SQE students practise (Chat SQE, Kinnu Law), though no AI tool has yet completed the SQE itself. But in the USA, the American Uniform Bar Exam was passed by GPT-4 in 2023, outperforming some human candidates.

    If a chatbot can ace your professional qualifying exam, is that exam fit for purpose? In other disciplines, the same question arises. Should medical students be assessed on their recall of rare diseases? Should business students be tested on their SWOT analyses? Should accounting students analyse corporate accounts? Should engineers calculate stress tolerances manually? All of these things can be completed by AI.

    Moonshots

    Regulatory bodies, universities and employers need to come together more than ever to seriously engage with what AI competency might look like – both in the workplace and the lecture theatre. Taking the approach of some regulators and insisting on in-person exams to prepare students for an industry entirely lacking in exams probably is not it. What does it mean to be an ethical, educated and adaptable professional in the age of AI?

    The HE sector urgently needs to move beyond discussions about whether or not students should be allowed to use AI. It is here, it is getting more powerful, and it is never leaving. Instead, we need to focus on how we assess in a world where AI is always on tap. If we cannot tell the difference between AI-generated work and student-generated work (and increasingly we cannot) then we need to shift our focus towards the process of learning rather than the outputs. Many institutions have made strides in this direction, using reflective journals, project-based learning and assessments which reward students for their ability to question, think, explain and justify their answers.

    This is likely to mean increased emphasis on live assessments – advocacy, negotiations, client interviews or real-world clinical experience. In other disciplines too, simulations, inter- and multi-disciplinary challenges, or industry-related authentic assessments. These are nothing revolutionary, they are pedagogically sound and all have been successfully implemented. They do, however, demand more of us as academics. More time, more support, more creativity. Scaling up from smaller modules to large cohorts is not an easy feat. It is much easier to keep doubling-down on what we already do, and hiding behind regulatory frameworks. However, we need to do these things (to quote JFK)

    not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone.

    In law schools, how many of us teach students how to use legal technology, how to understand algorithmic biases, or how to critically assess AI-generated legal advice? How many business schools teach students how to work alongside AI? How many medical schools give students the opportunity to learn how to critically interpret AI-generated diagnostics? The concept of “digital professionalism” – the ability to effectively and ethically use AI in a professional setting – is becoming a core graduate-level skill.

    If universities fail to take the lead on this, then private providers will be eager, and quick, to fill the void. We already have short courses, boot camps, and employer-led schemes which offer industry-tailored AI literacy programmes – and if universities start to look outdated and slow to adapt, students will vote with their feet.

    Invention and reinvention

    However, AI is not necessarily the enemy. Like all technological advances it is essentially nothing more than a tool. As with all tools – the stone axe, the printing press, the internet – it brings with it threats to some and opportunities for others. We have identified some of the threats, but also the opportunities that, with proper use, AI can bring – enhanced learning, deeper engagement, and democratisation of access to knowledge. Like the printing press, the real threat faced by HE is not the tool, but a failure to adapt to it. Nonetheless, a surprising number of academics are dusting off their metaphorical sabots to try and stop the development of AI.

    We should be working with the relevant sector and regulator, and asking ourselves how we can adapt our courses and use AI to support, rather than substitute for, genuine learning. We have an opportunity to teach students how to move away from being consumers of AI outputs and to become critical users, questioners and collaborators. We need to stop being reactive to AI – after all, it is developing faster than we can ever respond.

    Instead, we need to move towards reinvention. This could mean: embedding AI literacy in all disciplines; refocusing assessments to require more creative, empathetic, adaptable and ethical skills; preparing students and staff to work alongside AI, not to fear it; and closer collaboration with professional regulators.

    AI is being used in many professions, and the use will inevitably grow significantly over the next few years. Educators, regulators and employers need to work even more closely together to prepare students for this new world. Garfield Law is (currently) a one-off, and while it might be tempting to dismiss the development as tokenistic gimmickry, it is more than that.

    Professional courses are standing on the top of a diving board. We can choose obsolescence and climb back down, clinging to outdated practices and condemning ourselves to irrelevance. Or we can choose opportunity and dive into a more dynamic, responsive and human vision of professional learning.

    We just have to be brave enough to take the plunge.

    Source link

  • What should the higher education sector do about AI fatigue?

    Raise the topic of AI in education for discussion these days and you can feel the collective groan in the room.

    Sometimes I even hear it. We’re tired, I get it. Many students are too. But if we don’t keep working creatively to address the disruption to education posed by AI – if we just wait and see how it plays out – it will be too late.

    AI fatigue is many things

    There are a few factors at play, from an AI literacy divide, to simply talking past each other.

    AI literacy is nearly unmanageable. The complexity of AI in education, exacerbated by the pace of technological change, makes AI “literacy” very difficult to define, let alone attain. Educators represent a wide range of experience levels and conceptual frames, as well as differing opinions on the power, quality, opportunity, and risk of generative AI.

    One person will see AI as a radical first step in an intelligence revolution; the next will dismiss it as “mostly rubbish” and minimise the value of discussing it at all. And, as far as I have found, there is no leading definition of AI literacy to date. Some people don’t even like the term literacy.

    Our different conceptual frames compete with each other. Many disciplines and conceptual orientations are trying to talk together, each with their own assumptions and incentives. In any given space, we have the collision of expert with novice, entrepreneur with critic, sceptic with optimist, reductionist with holist… and the list goes on.

    We tend to silo and specialise. Because it is difficult to become comprehensively literate in generative AI (and its related issues), many adopt a narrow focus and stick with that: assessment design, academic integrity, authorship, cognitive offloading, energy consumption, bias, labour ethics, and others. Meetings take on the character of debates. At the very least, discussions of AI are time-consuming, as each focus seems to need airing every day.

    We feel grief for what we may be losing: human authorship, agency, status, and a whole range of normative relational behaviours. A colleague recently told me how sad she feels marking student work. Authorship, for example, is losing coherence as a category or shared value, which can be surreal and dispiriting for both writers and readers. AI’s disruption brings a deeply challenging emotional experience that’s rarely discussed.

    We are under-resourced. Institutions have been slow to roll out policy, form working groups, provide training, or fund staff time to research, prepare, plan, and design responses. It’s a daunting task to just keep up with, let alone get ahead of, Silicon Valley. Unfortunately, the burden is largely borne by individuals.

    The AI elephant in the room

    Much of the sector suffers from the wishful thinking that AI is “mostly rubbish”, not likely to change things much, or simply an annoyance. Many educators haven’t thought through how AI technologies may lead our societies and our education systems to change radically and quickly, or how these changes may impact the psychology of learning and teaching, not to mention the entire infrastructure of education. We talk past each other.

    Silicon Valley is openly pursuing artificial general intelligence (AGI), or something like that. Imagine a ChatGPT that can do your job, my job, and a big piece of the knowledge-work jobs recent graduates may hope to enter. Some insiders think this could arrive by 2027.

    A few weeks ago, Dario Amodei, CEO of AI company Anthropic, wrote his prediction that 50 per cent of entry-level office jobs could vanish within the next couple of years, and that unemployment overall could hit 20 per cent. This could be mostly hype or confirmation bias among the tech elite. But IBM, Klarna, and Duolingo have already cited AI-linked efficiencies in recent layoffs.

    Whether these changes take two years, or five, or even ten, it’s on the radar. So, let’s pause and imagine it. What happens to a generation of young people who perceive a growing scarcity of jobs, options and social purpose?

    Set aside, for now, what this means for cities, mental health, or the social fabric. What does it mean for higher education – especially if a university degree no longer holds the value it once promised? How should HE respond?

    Responding humanely

    I propose we respond with compassion, humanity… and something like a plan. What does this look like? Let me suggest a few possibilities.

    The sector works together. Imagine this: a consortium of institutions gathers together a resource base and discussion space (not social media) for AI in education. It respects diversity of positions and conceptual frames but also aims for a coherent and pragmatic working ethos that helps institutions and individuals make decisions. It drafts a change management plan for the sector, embracing adaptive management to create frameworks to support institutions to respond quickly, intelligently, flexibly, and humanely to the instability. It won’t resolve all the mess into a coherent solution, but it could provide a more stable framework for change, and lift the burden from thousands of us who feel we are reinventing the wheel every day.

    Institutions take action. Leading institutions embrace big discussions around the future of society, work, and education. They show a staunch willingness to face the risks and opportunities ahead, they devote resources to the project, and they take actions that support both staff and students to navigate change thoughtfully.

    Individuals and small groups are empowered to respond creatively. Supported by the sector and their HEIs, they collaborate to keep each other motivated, check each other on the hype, and find creative new avenues for teaching and learning. We solve problems for today while holding space for the messy discussions, speculating on future developments, and experimenting with education in a changing world.

    So sector leaders, please help us find some degree of convergence or coherence; institutions, please take action to resource and support your staff and students; and individuals, let’s work together to do something good.

    With leadership, action, and creative collaboration, we may just find the time and energy to build new language and vision for the strange landscape we have entered, to experiment safely with new models of knowledge creation and authorship, and to discover new capacities for self-knowledge and human value.

    So groan, yes – I groan with you. And breathe – I’ll go along with that too. And then, let’s see what we can build.

    Source link

  • Careers services can help students avoid making decisions based on AI fears

    How students use AI tools to improve their chances of landing a job has been central to the debate around AI and career advice and guidance. But there has been little discussion about AI’s impact on students’ decision making about which jobs and sectors they might enter.

    Jisc has recently published two studies that shine light on this area. Prospects at Jisc’s Early Careers Survey is an annual report that charts the career aspirations and experiences of more than 4,000 students and graduates over the previous 12 months. For the first time, the survey’s dominant theme was the normalisation of the use of AI tools and the influence that discourse around AI is having on career decision making. And the impact of AI on employability was also a major concern of Jisc’s Student Perceptions of AI Report 2025, based on in-depth discussions with over 170 students across FE and HE.

    Nerves jangling

    The rapid advancements in AI raise concerns about its long-term impact, the jobs it might affect, and the skills needed to compete in a jobs market shaped by AI. These uncertainties can leave students and graduates feeling anxious and unsure about their future career prospects.

    Important career decisions are already being made based on perceptions of how AI may change work. The Early Careers Survey found that one in ten students had already changed their career path because of AI.

    Plans were mainly altered because students feared that their chosen career was at risk of automation, anticipating fewer roles in certain areas and some jobs becoming phased out entirely. Areas such as coding, graphic design, legal, data science, film and art were frequently mentioned, with creative jobs seen as more likely to become obsolete.

    However, it is important not to get carried away on a wave of pessimism. Respondents were also pivoting to future-proof their careers. Many students see huge potential in AI, opting for careers that make use of the new technology or those that AI has helped create.

    But whether students see AI as an opportunity or a threat, the role of university careers and employability teams is the same in both cases. How do we support students in making informed decisions that are right for them?

    From static to electricity

    In today’s AI-driven landscape, careers services must evolve to meet a new kind of uncertainty. Unlike previous transitions, students now face automation anxiety, career paralysis, and fears of job displacement. This demands a shift away from static, one-size-fits-all advice toward more personalised, future-focused guidance.

    What’s different is the speed and complexity of change. Students are not only reacting to perceived risks but also actively exploring AI-enhanced roles. Careers practitioners should respond by embedding AI literacy, encouraging critical evaluation of AI-generated advice, and collaborating with employers to help students understand the evolving world of work.

    Equity must remain central. Not all students have equal access to digital tools or confidence in using them. Guidance must be inclusive, accessible, and responsive to diverse needs and aspirations.

    Calls to action should involve supporting students in developing adaptability, digital fluency, and human-centred skills like creativity and communication. They should also promote exploration over avoidance, and values-based decision-making over fear, helping students align career choices with what matters most to them.

    Ultimately, careers professionals are not here to predict the future, but to empower all students and early career professionals to shape it with confidence, curiosity, and resilience.

    On the balance beam

    This isn’t the first time that university employability teams have had to support students through change, anxiety, uncertainty or even decision paralysis when it comes to career planning, but the driver is certainly new. Through this uncertainty and transition, students and graduates need guidance from everyone who supports them, in education and the workplace.

    Collaborating with industry leaders and employers is key to ensuring students understand the AI-enhanced labour market, the way work is changing and that relevant skills are developed. Embedding AI literacy in the curriculum helps students develop familiarity and understand the opportunities as well as limitations. Jisc has launched an AI Literacy Curriculum for Teaching and Learning Staff to support this process.

    And promoting a balanced approach to career research and planning is important. The Early Careers Survey found almost a fifth of respondents are using generative AI tools like ChatGPT and Microsoft Copilot as a source of careers advice, and the majority (84 per cent) found them helpful.

    While careers and employability staff welcome the greater reach and impact AI enables, particularly in challenging times for the HE sector, colleagues at an AGCAS event were clear to emphasise the continued necessity for human connection, describing AI as “augmenting our service, not replacing it.”

    We need to ensure that students understand how to use AI tools effectively, spot when the information provided is outdated or incorrect, and combine them with other resources to ensure they get a balanced and fully rounded picture.

    Face-to-face interaction – with educators, employers and careers professionals – provides context and personalised feedback and discussion. A focus on developing essential human skills such as creativity, critical thinking and communication remains central to learning. After all, AI doesn’t just stand for artificial intelligence. It also means authentic interaction, the foundation upon which the employability experience is built.

    Guiding students through AI-driven change requires balanced, informed career planning. Careers services should embed AI literacy, collaborate with employers, and increase face-to-face support that builds human skills like creativity and communication. Less emphasis should be placed on one-size-fits-all advice and static labour market forecasting. Instead, the focus should be on active, student-centred approaches. Authentic interaction remains key to helping students navigate uncertainty with confidence and clarity.

    Source link

  • How Students Use Generative AI Beyond Writing – Faculty Focus

    Source link

  • How educators can use Gen AI to promote inclusion and widen access

    by Eleni Meletiadou

    Introduction

    Higher education faces a pivotal moment as Generative AI becomes increasingly embedded within academic practice. While AI technologies offer the potential to personalize learning, streamline processes, and expand access, they also risk exacerbating existing inequalities if not intentionally aligned with inclusive values. Building on our QAA-funded project outputs, this blog outlines a strategic framework for deploying AI to foster inclusion, equity, and ethical responsibility in higher education.

    The digital divide and GenAI

    Extensive research shows that students from marginalized backgrounds often face barriers in accessing digital tools, digital literacy training, and peer networks essential for technological confidence. GenAI exacerbates this divide, demanding not only infrastructure (devices, subscriptions, internet access) but also critical AI literacy. According to previous research, students with higher AI competence outperform peers academically, deepening outcome disparities.

    However, the challenge is not merely technological; it is social and structural. WP (Widening Participation) students often remain outside informal digital learning communities where GenAI tools are introduced and shared. Without intervention, GenAI risks becoming a “hidden curriculum” advantage for already-privileged groups.

    A framework for inclusive GenAI adoption

    Our QAA-funded “Framework for Educators” proposes five interrelated principles to guide ethical, inclusive AI integration:

    • Understanding and Awareness: Foundational AI literacy must be prioritized. Awareness campaigns showcasing real-world inclusive uses of AI (eg Otter.ai for students with hearing impairments) and tiered learning tracks from beginner to advanced levels ensure all students can access, understand, and critically engage with GenAI tools.
    • Inclusive Collaboration: GenAI should be used to foster diverse collaboration, not reinforce existing hierarchies. Tools like Miro and DeepL can support multilingual and neurodiverse team interactions, while AI-powered task management (eg Notion AI) ensures equitable participation. Embedding AI-driven teamwork protocols into coursework can normalize inclusive digital collaboration.
    • Skill Development: Higher-order cognitive skills must remain at the heart of AI use. Assignments that require evaluating AI outputs for bias, simulating ethical dilemmas, and creatively applying AI for social good nurture critical thinking, problem-solving, and ethical awareness.
    • Access to Resources: Infrastructure equity is critical. Universities must provide free or subsidized access to key AI tools (eg Grammarly, ReadSpeaker), establish Digital Accessibility Centers, and proactively support economically disadvantaged students.
    • Ethical Responsibility: Critical AI literacy must include an ethical dimension. Courses on AI ethics, student-led policy drafting workshops, and institutional AI Ethics Committees empower students to engage responsibly with AI technologies.

    Implementation strategies

    To operationalize the framework, a phased implementation plan is recommended:

    • Phase 1: Needs assessment and foundational AI workshops (0–3 months).
    • Phase 2: Pilot inclusive collaboration models and adaptive learning environments (3–9 months).
    • Phase 3: Scale successful practices, establish Ethics and Accessibility Hubs (9–24 months).

    Key success metrics include increased AI literacy rates, participation from underrepresented groups, enhanced group project equity, and demonstrated critical thinking skill growth.

    Discussion: opportunities and risks

    Without inclusive design, GenAI could deepen educational inequalities, as recent research warns. Students without access to GenAI resources or social capital will be disadvantaged both academically and professionally. Furthermore, impersonal AI-driven learning environments may weaken students’ sense of belonging, exacerbating mental health challenges.

    Conversely, intentional GenAI integration offers powerful opportunities. AI can personalize support for students with diverse learning needs, extend access to remote or rural learners, and reduce administrative burdens on staff – freeing them to focus on high-impact, relational work such as mentoring.

    Conclusion

    The future of inclusive higher education depends on whether GenAI is adopted with a clear commitment to equity and social justice. As our QAA project outputs demonstrate, the challenge is not merely technological but ethical and pedagogical. Institutions must move beyond access alone, embedding critical AI literacy, equitable resource distribution, community-building, and ethical responsibility into every stage of AI adoption.

    Generative AI will not close the digital divide on its own. It is our pedagogical choices, strategic designs, and values-driven implementations that will determine whether the AI-driven university of the future is one of exclusion – or transformation.

    This blog is based on the recent outputs from our QAA-funded project entitled: “Using AI to promote education for sustainable development and widen access to digital skills”

    Dr Eleni Meletiadou is an Associate Professor (Teaching) at London Metropolitan University  specialising in Equity, Diversity, and Inclusion (EDI), AI, inclusive digital pedagogy, and multilingual education. She leads the Education for Social Justice and Sustainable Learning and Development (RILEAS) and the Gender Equity, Diversity, and Inclusion (GEDI) Research Groups. Dr Meletiadou’s work, recognised with the British Academy of Management Education Practice Award (2023), focuses on transforming higher education curricula to promote equitable access, sustainability, and wellbeing. With over 15 years of international experience across 35 countries, she has led numerous projects in inclusive assessment and AI-enhanced learning. She is a Principal Fellow of the Higher Education Academy and serves on several editorial boards. Her research interests include organisational change, intercultural communication, gender equity, and Education for Sustainable Development (ESD). She actively contributes to global efforts in making education more inclusive and future-ready. LinkedIn: https://www.linkedin.com/in/dr-eleni-meletiadou/

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link