I cut my academic teeth working in initial teacher education.
I spent a lot of that time on a teeny-tiny chair at the back of a primary school classroom, watching a familiar pattern play out. The trainee would bring the class in, ask them to sit on the carpet, and begin the work of getting everybody “ready” for learning. Legs crossed. Hands still. Eyes on me. Voices off. Only when the performance of order matched the picture in their head would they let themselves begin the lesson.
Ten minutes later everyone was miserable. The children were fidgety (fair, I think, if you are seven and asked to sit on a cold floor at nine in the morning). The trainee was terrified that their grip on the room was slipping (also fair when you are 19 and asked to corral a room of 30 seven-year-olds). The time available for the meticulously planned activity that followed had been eroded, and the room had become fraught and oppositional.
When we unpacked it afterwards, my question would always be the same: who, actually, had been in the way of learning? It is a question that I keep thinking about when I look at the sector’s response to generative AI.
Tightening the screws on the wrong machine
There is a tendency for educating to drift into policing. The performance of a particular kind of order becomes The Thing, with learning something that might happen afterwards. It’s a reflex that shows up at every level, from pre-school through postgrad – and at every granularity, from classroom routines to institutional policy. Generative AI has really triggered that impulse, and it is felt most sharply in our most existential policing device: assessment.
Three things tend to happen in response. One has been to retreat to the exam hall. Bring students back under invigilated conditions, with a reassuring silence but for the scribble of pen on paper, and you can relax a little that what is recorded is the students’ own work. The price, of course, is in stripping out all the situated messiness that makes learning real and contextual.
A second has been to pull in exactly the opposite direction. Ask students to produce things that look more like authentic real-world artefacts. This is a move that was needed long before AI, and it really, really matters if we want graduates whose work is relevant to, and whose thinking is able to engage the complexity of, the world beyond our ivory towers. But of all the justifications for the shift, defence against AI is the weakest. It worked to begin with, but LLMs have evolved rapidly, and can now generate these “safely authentic” outputs with competence.
The third response has been a kind of meta solution. We accept that students will use AI but ask them to tell us how. We add a box on the submission for an AI-use statement, stick in an appendix of prompts and make transparency a critical virtue.
There is logic to all these responses, of course – and their combined use (often within a programme) reflects some of the nuance by which we approach the challenge. The trouble is, I think we may be responding to the wrong burning platform. AI is not only a new way to cheat. It is part of a more fundamental shift in the rules of “knowledge work.” It is reconfiguring what it is to “know” and how that becomes applicable.
When we focus on assessment first, we use it as a kind of border guard to that challenge; we’ll win a few skirmishes, achieve some short-term stabilisation, but miss the battle that matters – the one about the primacy of human contributions amongst the machines. The one about whether, to borrow architectural parlance from AI design, we are educating for the human in the loop, or to situate them as the owner and lead.
The mechanisation of graduateness
There is a timeless urgency to this transformation. Universities lift human capability. If we don’t keep asking what that capability is, and whether it has changed, we will drift into thin, performative versions of it. In some places, we already are. There is also a more immediate urgency, because this is landing in an era already distrustful of higher education’s value. Popular culture is sceptical. Ministers, parents, and students themselves ask, rightly or wrongly, for firmer assurance of the economic return on a very significant investment of time and money.
Generative AI is unhelpfully reconfiguring the kinds of cognitive labour on which the economic arguments we offer in response rely. For most of the modern history of universities, the core moves of knowledge work were tightly bound to human effort: writing, calculating, composing, designing, synthesising, modelling. We nurtured these “higher order” capabilities because nothing else could do them at scale.
That modern history, and the priorities it produced, were themselves shaped by an earlier economic reconfiguration. Mass higher education flourished, in part, because industrialisation created new kinds of work that needed new cognitive and professional functions. More complex economies needed people who could analyse, plan, model and manage, and universities expanded into that space. Our curricula, our standards, and much of our sense of graduateness were built around the idea of a person who could carry out those intellectual functions with a certain level of independence and fluency.
Generative AI unsettles that picture. It can write plausible prose, summarise and re-express sources, spin out code, and work fluently in familiar academic and professional genres. It is already nibbling away at the routine tasks that often begin graduate careers: first drafts of contracts and letters, basic copy, desk research and summary papers, early prototypes and first-cut analyses. If a growing slice of early-career work becomes at least partly automatable, the role that mass higher education built itself to supply starts to look less straightforward.
Dwelling on the consequences of industrialisation is instructive, but not a tidy comfort. Its immediate effects were brutal. It hollowed out skilled roles and dislocated craftspeople and trades. It advanced at a speed that outstripped the formation of protections that made the new order liveable. Labour law, regulation, welfare, reconfigured professions and associated education arrived late, after a long period of disruption. And the settlement was never clean. We still live with environmental and ethical consequences that were treated as externalities for far too long.
In its capacity to shift who does the work, AI is a change of similar consequence. We have a duty not to repeat the lagged, haphazard transition of that era. Universities are one of the few institutions with the reach, credibility, and intellectual range to help society make sense of a shift like this, and to apply pressure for change at a responsible pace.
In this, our job is not to defend an established version of knowledge work. It is to remake it in ways that keep the human contribution primary – whilst keeping the broader implications of new technologies in the frame. We need to accelerate the formation of new roles and practices and bring ethical and intellectual protections forward rather than letting them arrive late, organically and through counter-struggle.
In simple terms, we must not wait, respond and self-defend. We must help society imagine work in which “human plus AI” remains more valuable than AI on its own. That starts close to home: with what we choose to teach, what we choose to reward, and what we stop pretending is a proxy for human worth. It starts with an interrogation of what we see as “cognitive labour.”
Standards, value and the myth of the lone student
Recently in these pages I’ve had a little poke at classifications – beginning an arc of writing that plays with a tension between standards and value as organising principles of higher education. I am interested in how we can unsettle an anxious cultural default to the former, to make a more relaxed space for the latter. I’m also interested in how this can speak to a whole range of difficult challenges for the sector, beginning with AI.
Standards are important. They tell us whether students have cleared a bar. They matter because nobody wants a nurse, engineer or solicitor who has not reached a minimum level of competence. Standards also act as a shortcut for public trust. They let us say: we are rigorous. Our judgements are defensible; our awards mean the same thing over time. You can trust the system because it is stable and historic.
Value asks different questions. What can this person contribute by exercising their specific talents, in this specific context, with the tools and systems that surround them? What becomes possible because they are part of the work? In some ways, value is a better answer to questions of public trust. It makes our worth manifest, rather than proxied. It is more transparent and more direct. It speaks to contribution rather than pedigree, and it can be less elite in its assumptions about what good looks like, who gets to define it and who should receive that worth at face value. It also speaks better to the “human plus AI” conundrum.
The trouble is that our assessment and quality regimes are built for standards, and the culture around “rigour” often doubles down on that. On paper, assessment is how we evidence learning, even if we collapse it into a graded, normative form. In practice, it becomes how we police the standard. You can see it in the periodic furore over grade inflation, where the argument is rarely about learning and often about whether the bar has moved. And then assessment gets pressed into service for all sorts of tangential behaviours: rationing progression and opportunity, reassuring regulators and auditors, and, in a turn that still makes my skin crawl, compelling attendance and engagement.
That architecture misdirects our attention, which helps explain why our first instinct with AI is to tighten screws rather than rethink the challenge. It trains us to look for defensible proofs of individual performance at exactly the moment when individual performance is becoming a less honest proxy for capability.
And therein sits a deeper fiction: the myth of the lone student. One of the “standards” pacts of university assessment is a latent assumption of an atomised individual who produces work alone, because our final judgement of them will be similarly individualised. Any value created with others becomes pedagogically or administratively risky. It sits awkwardly with a tenet – established from Socrates, through Vygotsky and Bakhtin and on to Alexander and Mercer – that learning is necessarily dialogic. Universities do not simply credential individuals. They curate communities of learning and becoming that are inherently intersubjective; and the knowing that happens does so in the places of interplay, not in some internal sealed box.
You can see the tension between assessment fiction and that reality in how we handle group work. We worry about free riders, and students feel that too. We invent elaborate devices to carve a shared project back into defensible individual marks. We down-weight collaborative assessment because it is hard to justify at an exam board. Quality processes struggle to see, let alone reward, value created between people rather than by each one in isolation.
That lone-student fiction collides head-on with generative AI. When your standards presume solitary work, any use of generative tools becomes a threat. So, we tighten the conditions – through secure environments, “cheat proof” artefacts or a plea for honesty. We buttress assessments, and somewhere along the way they stop being primarily about what students can do and become preoccupied with what they might be getting away with.
That shift matters, because policing and educating pull in different directions – and we have become caught in the same trap as my trainees, insisting on order rather than purpose. Policing is convergent. It narrows options, checks conformity, tests whether people have stayed inside the lines. Much of our standards machinery is built for that convergent task. But educating, at its best, is necessarily divergent. It opens possibilities, nurtures judgement, and asks what people can now go on to do.
And here is the crux. The challenge in front of us is divergent and a convergent response will always be the wrong shape. We are relying heavily on tools rooted in an old picture of individual competence at exactly the point when we should be helping students explore and assert what their value might look like in AI-shaped, collaborative knowledge work.
Abstinence and harm reduction
It would be blithe to suggest that concerns about generative AI are confined to assessment. There is plenty of debate about what happens to the student experience when large language models do the heavy lifting. Alongside that sits a growing anxiety, supported by emerging (though still ambiguous) empirical work, that these tools may soften cognitive capacities. If systems can plan, draft and polish, will students ever experience the productive struggle that comes with learning? If they lean on them too early or too often, do they lose something in their own development?
Some of this is nostalgia for a time when effort looked different. But much of it is a serious question about attention, fluency and independence that deserves a hearing even without a neuroscience flourish. One of the things a university education should offer is the experience of having one’s thinking stretched. Students are entitled to feel their judgement sharpening and their voice gaining depth and confidence.
Nobody wants that eroded. This includes students, who crave a more articulated sense of appropriateness whilst at the same time noticing the zero-sum game that happens when everybody else is using tools to their advantage, in a system that will ultimately organise them into a hierarchy. What emerges is a grey economy of AI use, where students often hold justificatory beliefs about their use, but these are hazy and under-founded.
Working out how to protect intellectual effort in this is complex. Attempting to create tool-free spaces can quickly become another retrograde instance of policing rather than educating: naïve about its own chances of success, and blind to the opportunity to design something more developmental.
Indulge me, if you will, in a further nostalgia. Much earlier in my career I worked in secondary schools delivering drugs and sex education under the steer of an inspirationally progressive head of department. We insisted on a safe-use approach rather than defaulting to abstinence. Not because we stopped caring about harm, but because we were honest about the limits of prohibition, and about the abundant evidence of its failure.
I think there is something to learn from that. Harm reduction starts from what is actually happening and asks how to minimise damage while increasing agency. Students are already using generative tools. Some uses clearly undermine learning. Others are closer to what highly educated parents or professional mentors have always done for their children: explaining tricky ideas from different angles, reading a draft and asking hard questions, coaching a student through the early messiness of a task.
If our only move is policing, we push all of that into the shadows. The most confident and well-resourced students will find ways to use AI (and their existing human equivalents) to advantage. The ones who are most worried about being caught will either stay away, or use the tools in the riskiest, least reflective ways.
An educational response looks different. It takes the worry about cognitive decline seriously by insisting students still get to do real thinking, but not every kind of thinking. We no longer expect people to memorise every phone number they will ever need. Some offloading is a reasonable part of living in a complex, technologically enabled society.
The real, and more interesting, question is which cognitive muscles we want students to exercise, and where we deliberately insist on effort.
Keeping thinking at the centre
Answering that question requires us to be clear about the conditions we are educating in and for. AI is now part of the background, and the starting point for learning is rarely a blank page. A first pass is cheap, fluent and always available. It will be part of our students’ learning hygiene whether we like it or not.
The risk is not only that students can get to an answer quickly. It is that the uncanny register of certainty that comes with it can wrong-foot us. It reads like authority: coherent, well structured, often better phrased than we might manage. The temptation to stop there is powerful.
We can answer this. Universities have always tried to nurture students who do not stop at the first plausible account. We leave them less willing to take received wisdom at face value, more inclined to interrogate and unsettle, and less likely to bow to authority simply because it speaks confidently. We teach them to ask: what is the claim, what is the evidence, what is missing, what would change my mind? AI is a new source of ‘received’ knowledge. The question is whether students can keep that habit of intellectual resistance and independence when the output looks finished.
I wonder if there are two distinct jobs packed into this, and both merit overtly pedagogic responses in our classrooms. One is critical literacy: the ability to interrogate outputs, trace claims back to sources, notice what has been smoothed over, and spot what has been left out. The other is more developmental: the habit of not letting a fluent first pass steal your voice. Using the tool to get moving and then doing the work that makes the result genuinely yours. Strengthening the question, sharpening the argument, bringing in counter-positions, testing assumptions, and deciding what you think.
Reframed in this way, AI does not always have to be a shortcut. It can be used to apply pressure and challenge. It can generate an opposing view, ask for definitions, offer counterexamples, point out gaps, and keep pushing until a student’s claim becomes clearer and more defensible. It can also help students organise thoughts, surface tensions, and notice connections they had not yet made. The shortcut it provides to a first position could be reclaimed. It could fast forward us through transmissive processes and engineer more space to do more of the critical dialogic work that marks out higher order thinking.
If we are using the language of “atrophy” to describe what lazy use of AI can do, we might also name the alternative. Not protection from strain, but purposeful strain: a kind of cognitive hypertrophy, where capability develops because students are expected to take what the tool offers and to stretch themselves; to extend it, refine it, and stand behind it.
But we shouldn’t stop there. Because if this is a question of reclaiming the primacy of the human, we mustn’t reduce people solely to intellectualising processes.
Beyond cognition alone
There is more to human value in knowledge work than thinking with and against systems. What graduates bring is not only their ability to see past a generic summary. It is their judgement, their capacity to work with others, their willingness to stay with a difficult problem, and their sense of responsibility to people who will live with the consequences of their decisions. It is also their imagination: the ability to picture outcomes that are not already implied by the first draft – and their boldness: the capacity to flirt at the outskirts of certainty and the complex boundaries of interdisciplinarity.
Those are still intellectual qualities, but they are entangled with other facets of being human. Care. Embodiment. Emotion. Situated experience. The kinds of insight that come from living a particular life, in a particular body, in particular communities. Higher education has too often pushed these to the margins. Perhaps, in an era when the machines can mimic the qualities we have held dear, we can no longer afford to minimise the fullness of human being.
These capacities matter. They also shape what augmented cognition is used for. Two people can take the same AI-generated summary and go in quite different directions, depending on what and whom they care about, what histories and harms they recognise, and what they think a good outcome looks like.
Tools do not supply those commitments – only people can. This is why ‘human value’ can’t be reduced to better prompting or sharper critique. Care, responsibility, and imagination are what decide where judgement is directed, what risks get taken seriously, and what counts as a good outcome. Without that, augmented cognition becomes technically impressive and socially careless. Framed this way, humanness is not an optional add-on to the AI debate. It is the thing that gives the thinking its purpose.
What universities are for, this time round
My trainees were not wrong for wanting to arrive at order in their classrooms; they were acutely aware of their responsibility to create an environment conducive to learning. But when they confused the means with the end, they inadvertently constructed a barrier of their own.
Universities are not wrong for wanting to protect their standards; to ensure that our judgements of students are defensible, and that the developmental processes of stretch and challenge remain authentic in their experience. But an impulse to do this through policing and prohibition is the wrong response – one destined to fail, and one that passes up the teachable moments on offer.
Our task (and I write this not as an evangelist for AI) is not to defend against the machines, but to lift the humans – to reassert the critical value of their capabilities, and to reorient our pedagogies (and consequent assessment practices) to more overtly reclaim the kinds of productive strains, and responses to authority, that universities have always sought. In doing so, we can produce a new version of graduateness that has demonstrable value economically, culturally and socially.

