Category: AI

  • Can AI Keep Students Motivated, Or Does it Do the Opposite? – The 74

    Can AI Keep Students Motivated, Or Does it Do the Opposite? – The 74

    Imagine a student using a writing assistant powered by a generative AI chatbot. As the bot serves up practical suggestions and encouragement, insights come more easily, drafts polish up quickly and feedback loops feel immediate. It can be energizing. But when that AI support is removed, some students report feeling less confident or less willing to engage.

    These outcomes raise the question: Can AI tools genuinely boost student motivation? And what conditions can make or break that boost?

    As AI tools become more common in classroom settings, the answers to these questions matter a lot. While general-use tools such as ChatGPT or Claude remain popular, more and more students are encountering AI tools that are purpose-built to support learning, such as Khan Academy’s Khanmigo, which personalizes lessons. Others, such as ALEKS, provide adaptive feedback. Both tools adjust to a learner’s level and highlight progress over time, which helps students feel capable and see improvement. But there are still many unknowns about the long-term effects of these tools on learners’ progress, an issue I continue to study as an educational psychologist.

    What the evidence shows so far

    Recent studies indicate that AI can boost motivation, at least for certain groups, when deployed under the right conditions. A 2025 experiment with university students showed that when AI tools performed well and allowed meaningful interaction, students’ motivation and their confidence in being able to complete a task – known as self-efficacy – increased.

    For foreign language learners, a 2025 study found that university students using AI-driven personalized systems took more pleasure in learning and had less anxiety and more self-efficacy compared with those using traditional methods. A recent cross-cultural analysis with participants from Egypt, Saudi Arabia, Spain and Poland who were studying diverse majors suggested that positive motivational effects are strongest when tools prioritize autonomy, self-direction and critical thinking. These individual findings align with a broader, systematic review of generative AI tools that found positive effects on student motivation and engagement across cognitive, emotional and behavioral dimensions.

    A forthcoming meta-analysis from my team at the University of Alabama, which synthesized 71 studies, echoed these patterns. We found that generative AI tools on average produce moderate positive effects on motivation and engagement. The impact is larger when tools are used consistently over time rather than in one-off trials. Positive effects were also seen when teachers provided scaffolding, when students maintained agency in how they used the tool, and when the output quality was reliable.

    But there are caveats. More than 50 of the studies we reviewed did not draw on a clear theoretical framework of motivation, and some used methods that we found were weak or inappropriate. This raises concerns about the quality of the evidence and underscores how much more careful research is needed before one can say with confidence that AI nurtures students’ intrinsic motivation rather than just making tasks easier in the moment.

    When AI backfires

    There is also research that paints a more sobering picture. A large study of more than 3,500 participants found that while human–AI collaboration improved task performance, it reduced intrinsic motivation once the AI was removed. Students reported more boredom and less satisfaction, suggesting that overreliance on AI can erode confidence in their own abilities.

    Another study suggested that while learning achievement often rises with the use of AI tools, increases in motivation are smaller, inconsistent or short-lived. Quality matters as much as quantity. When AI delivers inaccurate results, or when students feel they have little control over how it is used, motivation quickly erodes. Confidence drops, engagement fades and students can begin to see the tool as a crutch rather than a support. And because there are not many long-term studies in this field, we still do not know whether AI can truly sustain motivation over time, or whether its benefits fade once the novelty wears off.

    Not all AI tools work the same way

    The impact of AI on student motivation is not one-size-fits-all. Our team’s meta-analysis shows that, on average, AI tools do have a positive effect, but the size of that effect depends on how and where they are used. When students work with AI regularly over time, when teachers guide them in using it thoughtfully, and when students feel in control of the process, the motivational benefits are much stronger.

    We also saw differences across settings. College students seemed to gain more than younger learners, STEM and writing courses tended to benefit more than other subjects, and tools designed to give feedback or tutoring support outperformed those that simply generated content.

    There is also evidence that general-use tools like ChatGPT or Claude do not reliably promote intrinsic motivation or deeper engagement with content, compared to learning-specific platforms such as ALEKS and Khanmigo, which are more effective at supporting persistence and self-efficacy. However, these tools often come with subscription or licensing costs. This raises questions of equity, since the students who could benefit most from motivational support may also be the least likely to afford it.

    These and other recent findings should be seen as only a starting point. Because AI is so new and is changing so quickly, what we know today may not hold true tomorrow. In a paper titled The Death and Rebirth of Research in Education in the Age of AI, the authors argue that the speed of technological change makes traditional studies outdated before they are even published. At the same time, AI opens the door to new ways of studying learning that are more participatory, flexible and imaginative. Taken together, the data and the critiques point to the same lesson: Context, quality and agency matter just as much as the technology itself.

    Why it matters for all of us

    The lessons from this growing body of research are straightforward. The presence of AI does not guarantee higher motivation, but it can make a difference if tools are designed and used with care and understanding of students’ needs. When it is used thoughtfully, in ways that strengthen students’ sense of competence, autonomy and connection to others, it can be a powerful ally in learning.

    But without those safeguards, the short-term boost in performance could come at a steep cost. Over time, there is the risk of weakening the very qualities that matter most – motivation, persistence, critical thinking and the uniquely human capacities that no machine can replace.

    For teachers, this means that while AI may prove a useful partner in learning, it should never serve as a stand-in for genuine instruction. For parents, it means paying attention to how children use AI at home, noticing whether they are exploring, practicing and building skills or simply leaning on it to finish tasks. For policymakers and technology developers, it means creating systems that support student agency, provide reliable feedback and avoid encouraging overreliance. And for students themselves, it is a reminder that AI can be a tool for growth, but only when paired with their own effort and curiosity.

    Regardless of technology, students need to feel capable, autonomous and connected. Without these basic psychological needs in place, their sense of motivation will falter – with or without AI.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Source link

  • Who gets to decide what counts as knowledge? Big tech, AI, and the future of epistemic agency in higher education

    Who gets to decide what counts as knowledge? Big tech, AI, and the future of epistemic agency in higher education

    by Mehreen Ashraf, Eimear Nolan, Manuel F Ramirez, Gazi Islam and Dirk Lindebaum

    Walk into almost any university today, and you can be sure to encounter the topic of AI and how it affects higher education (HE). AI applications, especially large language models (LLMs), have become part of everyday academic life, being used for drafting outlines, summarising readings, and even helping students to ‘think’. For some, the emergence of LLMs is a revolution that makes learning more efficient and accessible. For others, it signals something far more unsettling: a shift in how and by whom knowledge is controlled. This latter point is the focus of our new article published in Organization Studies.

    At the heart of our article is a shift in what is referred to as epistemic (or knowledge) governance: the way in which knowledge is created, organised, and legitimised in HE. In plain terms, epistemic governance is about who gets to decide what counts as credible, whose voices are heard, and how the rules of knowing are set. Universities have historically been central to epistemic governance through peer review, academic freedom, teaching, and the public mission of scholarship. But as AI tools become deeply embedded in teaching and research, those rules are being rewritten not by educators or policymakers, but by the companies that own the technology.

    From epistemic agents to epistemic consumers

    Universities, academics, and students have traditionally been epistemic agents: active producers and interpreters of knowledge. They ask questions, test ideas, and challenge assumptions. But when we rely on AI systems to generate or validate content, we risk shifting from being agents of knowledge to consumers of knowledge. Technology takes on the heavy cognitive work: it finds sources, summarises arguments, and even produces prose that sounds academic. However, this efficiency comes at the cost of profound changes in the nature of intellectual work.

    Students who rely on AI to tidy up their essays, or generate references, will learn less about the process of critically evaluating sources, connecting ideas and constructing arguments, which are essential for reasoning through complex problems. Academics who let AI draft research sections, or feed decision letters and reviewer reports into AI with the request that it produce a ‘revision strategy’, might save time but lose the slow, reflective process that leads to original thought, while undercutting their own agency in the process. And institutions that embed AI into learning systems hand part of their epistemic governance – their authority to define what knowledge is and how it is judged – to private corporations.

    This is not about individual laziness; it is structural. As Shoshana Zuboff argued in The age of surveillance capitalism, digital infrastructures do not just collect information, they reorganise how we value and act upon it. When universities become dependent on tools owned by big tech, they enter an ecosystem where the incentives are commercial, not educational.

    Big tech and the politics of knowing

    The idea that universities might lose control of knowledge sounds abstract, but it is already visible. Jisc’s 2024 framework on AI in tertiary education warns that institutions must not ‘outsource their intellectual labour to unaccountable systems,’ yet that outsourcing is happening quietly. Many UK universities, including the University of Oxford, have signed up to corporate AI platforms to be used by staff and students alike. This, in turn, facilitates the collection of data on learning behaviours that can be fed back into proprietary models.

    This data loop gives big tech enormous influence over what is known and how it is known. A company’s algorithm can shape how research is accessed, which papers surface first, or which ‘learning outcomes’ appear most efficient to achieve. That’s epistemic governance in action: the invisible scaffolding that structures knowledge behind the scenes. At the same time, it is easy to see why AI technologies appeal to universities under pressure. AI tools promise speed, standardisation, lower costs, and measurable performance, all seductive in a sector struggling with staff shortages and audit culture. But those same features risk hollowing out the human side of scholarship: interpretation, dissent, and moral reasoning. The risk is not that AI will replace academics but that it will change them, turning universities from communities of inquiry into systems of verification.

    The Humboldtian ideal and why it is still relevant

    The modern research university was shaped by the 19th-century thinker Wilhelm von Humboldt, who imagined higher education as a public good, a space where teaching and research were united in the pursuit of understanding. The goal was not efficiency: it was freedom. Freedom to think, to question, to fail, and to imagine differently.

    That ideal has never been perfectly achieved, but it remains a vital counterweight to market-driven logics that render AI a natural way forward in HE. When HE serves as a place of critical inquiry, it nourishes democracy itself. When it becomes a service industry optimised by algorithms, it risks producing what Žižek once called ‘humans who talk like chatbots’: fluent, but shallow.

    The drift toward organised immaturity

    Scholars like Andreas Scherer and colleagues describe this shift as organised immaturity: a condition where sociotechnical systems prompt us to stop thinking for ourselves. While AI tools appear to liberate us from labour, they are in fact narrowing the space for judgment and doubt.

    In HE, that immaturity shows up when students skip the reading because ‘ChatGPT can summarise it’, or when lecturers rely on AI slides rather than designing lessons for their own cohort. Each act seems harmless, but collectively they erode our epistemic agency. The more we delegate cognition to systems optimised for efficiency, the less we cultivate the messy, reflective habits that sustain democratic thinking. Immanuel Kant once defined immaturity as ‘the inability to use one’s understanding without guidance from another.’ In the age of AI, that ‘other’ may well be an algorithm trained on millions of data points, but answerable to no one.

    Reclaiming epistemic agency

    So how can higher education reclaim its epistemic agency? The answer lies not only in rejecting AI but also in rethinking our possible relationships with it. Universities need to treat generative tools as objects of inquiry, not an invisible infrastructure. That means embedding critical digital literacy across curricula: not simply training students to use AI responsibly, but teaching them to question how it works, whose knowledge it privileges, and whose it leaves out.

    In classrooms, educators could experiment with comparative exercises: have students write an essay on their own, then analyse an AI version of the same task. What’s missing? What assumptions are built in? How were students changed when the AI wrote the essay for them and when they wrote it themselves? As the Russell Group’s 2024 AI principles note, ‘critical engagement must remain at the heart of learning.’

    In research, academics too must realise that their unique perspectives, disciplinary judgement, and interpretive voices matter, perhaps now more than ever, in a system where AI’s homogenisation of knowledge looms. We need to understand that the more we subscribe to values of optimisation and efficiency as preferred ways of doing academic work, the more naturally the penetration of AI into HE will unfold.

    Institutionally, universities might consider building open, transparent AI systems through consortia, rather than depending entirely on proprietary tools. This isn’t just about ethics; it’s about governance and ensuring that epistemic authority remains a public, democratic responsibility.

    Why this matters to you

    Epistemic governance and epistemic agency may sound like abstract academic terms, but they refer to something fundamental: the ability of societies and citizens (not just ‘workers’) to think for themselves. If or when universities lose control over how knowledge is created, validated and shared, we risk not just changing education but weakening democracy. As journalist George Monbiot recently wrote, ‘you cannot speak truth to power if power controls your words.’ The same is true for HE. We cannot speak truth to power if power now writes our essays, marks our assignments, and curates our reading lists.

    Mehreen Ashraf is an Assistant Professor at Cardiff Business School, Cardiff University, United Kingdom.

    Eimear Nolan is an Associate Professor in International Business at Trinity Business School, Trinity College Dublin, Ireland.

    Manuel F Ramirez is Lecturer in Organisation Studies at the University of Liverpool Management School, UK.

    Gazi Islam is Professor of People, Organizations and Society at Grenoble Ecole de Management, France.

    Dirk Lindebaum is Professor of Management and Organisation at the School of Management, University of Bath.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • Higher education needs a plan in place for student “pastoral” use of AI

    Higher education needs a plan in place for student “pastoral” use of AI

    With 18 per cent of students reporting mental health difficulties, a figure which has tripled in just seven years, universities are navigating a crisis.

    The student experience can compound many of the risk factors for poor mental health – from managing constrained budgets and navigating the cost of learning crisis, to moving away from established support systems, and balancing high-stakes assessment with course workload and part-time work.

    In response, universities provide a range of free support services, including counselling and wellbeing provision, alongside specialist mental health advisory services. But if we’re honest, these services are under strain. Despite rising expenditure, they’re still often under-resourced, overstretched, and unable to keep pace with growing demand. With staff-student ratios at impossible levels and wait times for therapeutic support often exceeding ten weeks, some students are turning to alternatives for more immediate care.

    And in this void, artificial intelligence is stepping in. While ChatGPT-written essays dominate the sector’s AI discussions, the rise of “pastoral AI” highlights a far more urgent and overlooked AI use case – with consequences more troubling than academic misconduct.

    Affective conversations

    For the uninitiated, the landscape of “affective” or “pastoral” AI is broad. Mainstream tools like Microsoft’s Copilot or OpenAI’s ChatGPT are designed for productivity, not emotional support. Yet research suggests that users increasingly turn to them for exactly that – seeking help with breakups, mental health advice, and other life challenges, as well as essay writing. While affective conversations may account for only a small proportion of overall use (under three per cent in some studies), the full picture is poorly understood.

    Then there are AI “companions” such as Replika or Character.AI – chatbots built specifically for affective use. These are optimised to listen, respond with empathy, offer intimacy, and act as virtual friends, confidants or even “therapists”.

    This is not a fringe phenomenon. Replika claims over 25 million users, while Snapchat’s My AI counts more than 150 million. The numbers are growing fast. As the affective capacity of these tools improves, they are becoming some of the most popular and intensively used forms of generative AI – and increasingly addictive.

    A recent report found that users spend an average of 86 minutes a day with AI companions – more than on Instagram or YouTube, and not far behind TikTok. These bots are designed to keep users engaged, often relying on sycophantic feedback loops that affirm worldviews regardless of truth or ethics. Because large language models are trained in part through human feedback, their output is often highly sycophantic – “agreeable” responses that are persuasive and pleasing – which can become especially risky in emotionally charged conversations with vulnerable users.

    Empathy optimisations

    For students already experiencing poor mental health, the risks are acute. Evidence is emerging that these engagement-at-all-costs chatbots rarely guide conversations to a natural resolution. Instead, their sycophancy can fuel delusions, amplify mania, or validate psychosis.

    Adding to these concerns, legal cases and investigative reporting are surfacing deeply troubling examples: chatbots encouraging violence, sending unsolicited sexual content, reinforcing delusional thinking, or nudging users to buy them virtual gifts. One case alleged a chatbot encouraged a teenager to murder his parents after they restricted his screen time; another saw a chatbot advise a fictional recovering meth addict to take a “small hit” after a bad week. These are not outliers but the predictable by-products of systems optimised for empathy but unbound by ethics.

    And it’s young people who are engaging with them most. More than 70 per cent of companion app users are aged 18 to 35, and two-thirds of Character.AI’s users are 18 to 24 – the same demographic that makes up the majority of our student population.

    The potential harm here is not speculative. It is real and affecting students right now. Yet “pastoral” AI use remains almost entirely absent from higher education’s AI conversations. That is a mistake. With lawsuits now spotlighting cases of AI “encouraged” suicides among vulnerable young people – many of whom first encountered AI through academic use – the sector cannot afford to ignore this.

    Paint a clearer picture

    Understanding why students turn to AI for pastoral support might help. Reports highlight loneliness and vulnerability as key indicators. One found that 17 per cent of young people valued AI companions because they were “always available,” while 12 per cent said they appreciated being able to share things they could not tell friends or family. Another reported that 12 per cent of young people were using chatbots because they had no one else to talk to – a figure that rose to 23 per cent among vulnerable young people, who were also more likely to use AI for emotional support or therapy.

    We talk often about belonging as the cornerstone of student success and wellbeing – with reducing loneliness a key measure of institutional effectiveness. Pastoral AI use suggests policymakers may have much to learn from this agenda. More thinking is needed to understand why the lure of an always-available, non-judgemental digital “companion” feels so powerful to our students – and what that tells us about our existing support.

    Yet AI discussions in higher education remain narrowly focused on academic integrity and essay writing. Our evidence base reflects this: the Student Generative AI Survey – arguably the best sector-wide tool we have – gives little attention to pastoral or wellbeing-related uses. The result is that data on this area of significant risk remains fragmented and anecdotal. Without a fuller sector-specific understanding of student pastoral AI use, we risk stalling progress on developing effective, sector-wide strategies.

    This means institutions need to start a different kind of AI conversation – one grounded in ethics, wellbeing, and emotional care. It will require drawing on different expertise: not just academics and technologists, but also counsellors, student services staff, pastoral advisers, and mental health professionals. These are the people best placed to understand how AI is reshaping the emotional lives of our students.

    Any serious AI strategy must recognise that students are turning to these tools not just for essays, but for comfort and belonging too, and we must offer something better in return.

    If some of our students find it easier to confide in chatbots than in people, we need to confront what that says about the accessibility and design of our existing support systems, and how we might improve and resource them. Building a pastoral AI strategy is less about finding a perfect solution and more about treating pastoral AI seriously, as a mirror that reflects back at us student loneliness, vulnerabilities, and institutional support gaps. These reflections should push us to re-centre those experiences and to reimagine our pastoral support provision in an image that is genuinely and unapologetically human.

    Source link

  • How UNE trained an AI-literate workforce – Campus Review

    How UNE trained an AI-literate workforce – Campus Review

    Almost all employees at the University of New England (UNE) use AI each day to augment their work, even as the wider sector remains slow to adopt the technology across its workforce.


    Source link

  • Generic AI cannot capture higher education’s unwritten rules

    Generic AI cannot capture higher education’s unwritten rules

    Some years ago, I came across Walter Moberly’s The Crisis in the University. In the years after the Second World War, universities faced a perfect storm: financial strain, shifting student demographics, and a society wrestling with lost values. Every generation has its reckoning. Universities don’t just mirror the societies they serve – they help define what those societies might become.

    Today’s crisis looks very different. It isn’t about reconstruction or mass expansion. It’s about knowledge itself – how it is mediated and shaped in a world of artificial intelligence. The question is whether universities can hold on to their cultural distinctiveness once LLM-enabled workflows start to drive their daily operations.

    The unwritten rules

    Let’s be clear: universities are complicated beasts. Policies, frameworks and benchmarks provide a skeleton. But the flesh and blood of higher education live elsewhere – in the unwritten rules of culture.

    Anyone who has sat through a validation panel, squinted at the spreadsheets for a TEF submission, or tried to navigate an approval workflow knows what I mean. Institutions don’t just run on paperwork; they run on tacit understandings, corridor conversations and half-spoken agreements.

    These practices rarely make it into a handbook – nor should they – but they shape everything from governance to the student experience. And here’s the rub: large language models, however clever, can’t see what isn’t codified. Which means they can’t capture the very rules that make one university distinctive from another.

    The limits of generic AI

    AI is already embedded in the sector. We see it in student support chatbots, plagiarism detection, learning platforms, and back-office systems. But these tools are built on vast, generic datasets. They flatten nuance, reproduce bias and assume a one-size-fits-all worldview.

    Drop them straight into higher education and the risk is obvious: universities start to look interchangeable. An algorithm might churn out a compliant REF impact statement. But it won’t explain why Institution A counts one case study as transformative while Institution B insists on another, or why quality assurance at one university winds its way through a labyrinth of committees while at another it barely leaves the Dean’s desk. This isn’t just a technical glitch. It’s a governance risk. Allow external platforms to hard-code the rules of engagement and higher education loses more than efficiency – it loses identity, and with it agency.

    The temptation to automate is real. Universities are drowning in compliance. Office for Students returns, REF, KEF and TEF submissions, equality reporting, Freedom of Information requests, the Race Equality Charter, endless templates – the bureaucracy multiplies every year.

    Staff are exhausted. Worse, these demands eat into time meant for teaching, research and supporting students. Ministers talk about “cutting red tape,” but in practice the load only increases. Automation looks like salvation. Drafting policies, preparing reports, filling forms – AI can do all this faster and more cheaply.

    But higher education isn’t just about efficiency. It’s also about identity and purpose. If efficiency is pursued at the expense of culture, universities risk hollowing out the very things that make them distinctive.

    Institutional memory matters

    Universities are among the UK’s most enduring civic institutions, each with a long memory shaped by place. A faculty’s interpretation of QAA benchmarks, the way a board debates grade boundaries, the precedents that guide how policies are applied – all of this is institutional knowledge.

    Very little of it is codified. Sit in a Senate meeting or a Council away-day and you quickly see how much depends on inherited understanding. When senior staff leave or processes shift, that memory can vanish – which is why universities so often feel like they are reinventing the wheel.

    Here, human-assistive AI could play a role. Not by replacing people, but by capturing and transmitting tacit practices alongside the formal rulebook. Done well, that kind of LLM could preserve memory without erasing culture.

    So, what does “different” look like? The Turing Institute recently urged the academy to think about AI in relation to the humanities, not just engineering. My own experiments – from the Bernie Grant Archive LLM to a Business Case LLM and a Curriculum Innovation LLM – point in the same direction.

    The principles are clear. Systems should be co-designed with staff, reflecting how people actually work rather than imposing abstract process maps. They must be assistive, not directive – capable of producing drafts and suggestions but always requiring human oversight.

    They need to embed cultural nuance: keeping tone, tradition and tacit practice alive alongside compliance. That way outputs reflect the character of the institution, reinforcing its USP rather than erasing it. They should preserve institutional knowledge by drawing on archives and precedents to create a living record of decision-making. And they must build in error prevention, using human feedback loops to catch hallucinations and conceptual drift.

    Done this way, AI lightens the bureaucratic load without stripping out the culture and identity that make universities what they are.

    The sector’s inflection point

    So back to the existential question. It’s not whether to adopt AI – that ship has already sailed. The real issue is whether universities will let generic platforms reshape them in their image, or whether the sector can design tools that reflect its own values.

    And the timing matters. We’re heading into a decade of constrained funding, student number caps, and rising ministerial scrutiny. Decisions about AI won’t just be about efficiency – they will go to the heart of what kind of universities survive and thrive in this environment.

    If institutions want to preserve their distinctiveness, they cannot outsource AI wholesale. They must build and shape models that reflect their own ways of working – and collaborate across the sector to do so. Otherwise, the invisible knowledge that makes one university different from another will be drained away by automation.

    That means getting specific. Is AI in higher education infrastructure, pedagogy, or governance? How do we balance efficiency with the preservation of tacit knowledge? Who owns institutional memory once it’s embedded in AI – the supplier, or the university? Caveat emptor matters here. And what happens if we automate quality assurance without accounting for cultural nuance?

    These aren’t questions that can be answered in a single policy cycle. But they can’t be ducked either. The design choices being made now will shape not just efficiency, but the very fabric of universities for decades to come.

    The zeitgeist of responsibility

    Every wave of technology promises efficiency. Few pay attention to culture. Unless the sector intervenes, large language models will be no different.

    This is, in short, a moment of responsibility. Universities can co-design AI that reflects their values, reduces bureaucracy and preserves identity. Or they can sit back and watch as generic platforms erode the lifeblood of the sector, automating away the subtle rules that make higher education what it is.

    In 1989, at the start of my BBC career, I stood on the Berlin Wall and watched the world change before my eyes. Today, higher education faces a moment of similar magnitude. The choice is stark: be shapers and leaders, or followers and losers.

    Source link

  • We cannot address the AI challenge by acting as though assessment is a standalone activity

    We cannot address the AI challenge by acting as though assessment is a standalone activity

    How to design reliable, valid and fair assessment in an AI-infused world is one of those challenges that feels intractable.

    The scale and extent of the task, it seems, outstrips the available resource to deal with it. In these circumstances it is always worth stepping back to re-frame, perhaps reconceptualise, what the problem is, exactly. Is our framing too narrow? Have we succeeded (yet) in perceiving the most salient aspects of it?

    As an educational development professional, seeking to support institutional policy and learning and teaching practices, I’ve been part of numerous discussions within and beyond my institution. At first, we framed the problem as a threat to the integrity of universities’ power to reliably and fairly award degrees and to certify levels of competence. How do we safeguard this authority and credibly certify learning when the evidence we collect of the learning having taken place can be mimicked so easily? And the act is so undetectable to boot?

    Seen this way the challenge is insurmountable.

    But this framing positions students as devoid of ethical intent, love of learning for its own sake, or capacity for disciplined “digital professionalism”. It also absolves us of the responsibility of providing an education which results in these outcomes. What if we frame the problem instead as a challenge of AI to higher education practices as a whole and not just to assessment? We know the use of AI in HE ranges widely, but we are only just beginning to comprehend the extent to which it redraws the basis of our educative relationship with students.

    Rooted in subject knowledge

    I’m finding that some very old ideas about what constitutes teaching expertise and how students learn are illuminating: the very questions that expert teachers have always asked themselves are in fact newly pertinent as we (re)design education in an AI world. This challenge of AI is not as novel as it first appeared.

    Fundamentally, we are responsible for curriculum design which builds students’ ethical, intellectual and creative development over the course of a whole programme in ways that are relevant to society and future employment. Academic subject content knowledge is at the core of this endeavour and it is this which is the most unnerving part of the challenge presented by AI. I have lost count of the number of times colleagues have said, “I am an expert in [insert relevant subject area], I did not train for this” – where “this” is AI.

    The most resource-intensive need that we have is for an expansion of subject content knowledge: every academic who teaches now needs a subject content knowledge which encompasses a consideration of the interplay between their field of expertise and AI, and specifically the use of AI in learning and professional practice in their field.

    It is only on the basis of this enhanced subject content knowledge that we can then go on to ask: what preconceptions are my students bringing to this subject matter? What prior experience and views do they have about AI use? What precisely will be my educational purpose? How will students engage with this through a newly adjusted repertoire of curriculum and teaching strategies? The task of HE remains a matter of comprehending a new reality and then designing for the comprehension of others. Perhaps the difference now is that the journey of comprehension is even more collaborative and even less finite than it once would have seemed.

    Beyond futile gestures

    All this is not to say that the specific challenge of ensuring that assessment is valid disappears. A universal need for all learners is to develop a capacity for qualitative judgement and to learn to seek, interpret and critically respond to feedback about their own work. AI may well assist in some of these processes, but developing students’ agency, competence and ethical use of it is arguably a prerequisite. In response to this conundrum, some colleagues suggest a return to the in-person examination – even as a baseline to establish in a valid way levels of students’ understanding.

    Let’s leave aside for a moment the argument about the extent to which in-person exams were ever a valid way of assessing much of what we claimed. Rather than focusing on how we can verify students’ learning, let’s emphasise more strongly the need for students themselves to be in touch with the extent and depth of their own understanding, independently of AI.

    What if we reimagined the in-person high stakes summative examination as a low-stakes diagnostic event in which students test and re-test their understanding, capacity to articulate new concepts or design novel solutions? What if such events became periodic collaborative learning reviews? And yes, also a baseline, which assists us all – including students, who after all also have a vested interest – in ensuring that our assessments are valid.

    Treating the challenge of AI as though assessment stands alone from the rest of higher education is too narrow a frame – one that consigns us to a kind of futile authoritarianism which renders assessment practices performative and irrelevant to our and our students’ reality.

    There is much work to do in expanding subject content knowledge and in reimagining our curricula and reconfiguring assessment design at programme level such that it redraws our educative relationship with students. Assessment more than ever has to become a common endeavour rather than something we “provide” to students. A focus on how we conceptualise the trajectory of students’ intellectual, ethical and creative development is inescapable if we are serious about tackling this challenge in a meaningful way.

    Source link

  • JCU vice-chancellor Simon Biggs – Campus Review

    JCU vice-chancellor Simon Biggs – Campus Review

    Vice-chancellor of James Cook University Simon Biggs said artificial intelligence is critical in helping young people with companionship and loneliness.


    Source link

  • AI-Fueled Fraud in Higher Education

    AI-Fueled Fraud in Higher Education

    Colleges across the United States are facing an alarming increase in “ghost students”—fraudulent applicants who infiltrate online enrollment systems, collect financial aid, and vanish before delivering any academic engagement. The problem, fueled by advances in artificial intelligence and weaknesses in identity verification processes, is undermining trust, misdirecting resources, and placing real students at risk.

    What Is a Ghost Student?

    A ghost student is not simply someone who drops out. These are fully fabricated identities—sometimes based on stolen personal information, sometimes entirely synthetic—created to fraudulently enroll in colleges. Fraudsters use AI tools to generate admissions essays, forge transcripts, and even produce deepfake images and videos for identity verification.

    Once enrolled, ghost students typically sign up for online courses, complete minimal coursework to stay active long enough to qualify for financial aid, and then disappear once funds are disbursed.

    Scope and Impact

    The scale of the problem is significant and growing:

    • California community colleges flagged approximately 460,000 suspicious applications in a single year—nearly 20% of the total—resulting in more than $11 million in fraudulent aid disbursements.

    • The College of Southern Nevada reported losing $7.4 million to ghost student fraud in one semester.

    • At Century College in Minnesota, instructors discovered that roughly 15% of students in a single course were fake enrollees.

    • California’s overall community college system reported over $13 million in financial aid losses in a single year due to such schemes—a 74% increase from the previous year.

    The consequences extend beyond financial loss. Legitimate students are shut out of course seats. Faculty spend hours identifying and reporting ghost students. Institutional data becomes unreliable. Most importantly, public trust in higher education systems is eroded.

    Why Now?

    Several developments have enabled this rise in fraud:

    1. The shift to online learning during the pandemic decreased opportunities for in-person identity verification.

    2. AI tools—such as large language models, AI voice generators, and synthetic video platforms—allow fraudsters to create highly convincing fake identities at scale.

    3. Open-access policies at many institutions, particularly community colleges, allow applications to be submitted with minimal verification.

    4. Budget cuts and staff shortages have left many colleges without the resources to identify and remove fake students in a timely manner.

    How Institutions Are Responding

    Colleges and universities are implementing multiple strategies to fight back:

    Identity Verification Tools

    Some institutions now require government-issued IDs matched with biometric verification—such as real-time selfies with liveness detection—to confirm applicants’ identities.

    Faculty-Led Screening

    Instructors are being encouraged to require early student engagement via Zoom, video introductions, or synchronous activities to confirm that enrolled students are real individuals.

    Policy and Federal Support

    The U.S. Department of Education will soon require live ID verification for flagged FAFSA applicants. Some states, such as California, are considering application fees or more robust identity checks at the enrollment stage.

    AI-Driven Pattern Detection

    Tools like LightLeap.AI and ID.me are helping institutions track unusual behaviors such as duplicate IP addresses, linguistic patterns, and inconsistent documentation to detect fraud attempts.
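
    To make the detection idea concrete, the sketch below shows the kind of simple rule-based triage that commercial tools layer far more sophistication on top of. It is illustrative only: the record fields, thresholds and function names are hypothetical and are not the API of LightLeap.AI, ID.me or any other vendor. The point is that shared IP addresses and near-duplicate essay text are cheap signals an institution could surface for human review.

    ```python
    from collections import defaultdict
    from difflib import SequenceMatcher

    # Hypothetical application records; a real system would pull these from an
    # admissions or student-information-system export. Field names are illustrative.
    applications = [
        {"id": "A1", "ip": "203.0.113.7", "essay": "I want to study nursing to help my community."},
        {"id": "A2", "ip": "203.0.113.7", "essay": "I want to study nursing to help my community!"},
        {"id": "A3", "ip": "198.51.100.4", "essay": "Economics has fascinated me since high school."},
    ]

    def flag_suspicious(apps, essay_similarity_threshold=0.9):
        """Return {application_id: [reasons]} for records that look suspicious."""
        flags = defaultdict(list)

        # Signal 1: multiple applications submitted from the same IP address.
        by_ip = defaultdict(list)
        for app in apps:
            by_ip[app["ip"]].append(app["id"])
        for ip, ids in by_ip.items():
            if len(ids) > 1:
                for app_id in ids:
                    flags[app_id].append(f"shares IP {ip} with {len(ids) - 1} other application(s)")

        # Signal 2: near-duplicate essay text, a common sign of templated submissions.
        for i, a in enumerate(apps):
            for b in apps[i + 1:]:
                ratio = SequenceMatcher(None, a["essay"], b["essay"]).ratio()
                if ratio >= essay_similarity_threshold:
                    flags[a["id"]].append(f"essay is {ratio:.0%} similar to {b['id']}")
                    flags[b["id"]].append(f"essay is {ratio:.0%} similar to {a['id']}")

        return dict(flags)

    if __name__ == "__main__":
        for app_id, reasons in sorted(flag_suspicious(applications).items()):
            print(app_id, "->", "; ".join(reasons))
    ```

    Flags produced this way are only a triage signal; as the cases above make clear, a human reviewer would still need to confirm identity before any enrolment or aid decision.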

    Recommendations for HEIs

    To mitigate the risk of ghost student infiltration, higher education institutions should:

    • Implement digital identity verification systems before enrollment or aid disbursement.

    • Train faculty and staff to recognize and report suspicious activity early in the semester.

    • Deploy AI tools to detect patterns in application and login data.

    • Foster collaboration across institutions to share data on emerging fraud trends.

    • Communicate transparently with students about new verification procedures and the reasons behind them.

    Why It Matters

    Ghost student fraud is more than a financial threat—it is a systemic risk to educational access, operational efficiency, and institutional credibility. With AI-enabled fraud growing in sophistication, higher education must act decisively to safeguard the integrity of enrollment, instruction, and student support systems.


    Source link