When artificial intelligence arrived in higher education at unprecedented speed, the instinct across the sector was largely the same: respond quickly, tighten regulations, issue warnings about academic misconduct, and invest in systems that promised to separate the “real” student voice from the synthetic one.
Yet the faster these policies emerged, the more alike they became: reactive, tool-centred, and preoccupied with risk. In many places, AI entered the university not as an opportunity to reimagine teaching and learning, but as a problem to be contained.
At our institution, we realised early on that following this path would not only fall short of our ambitions but would also betray our identity. As a university of social purpose, we hold ourselves to a set of commitments that run deeper than operational efficiency or regulatory compliance. These commitments to inclusion, sustainability, human dignity, equity, and transparent communication shape everything from our curricula to our community partnerships. If we treated AI purely as a technical issue, we would be abandoning the very values that define who we are. So instead of asking “Which tools should we allow?” we began with a very different question: “What does our educational purpose require of us in an age shaped by AI?”
The moment of clarity
This question crystallised for us during UNESCO Digital Learning Week in September. Being in a room with educators, policymakers, and researchers from across the world made one thing uncomfortably clear: AI has the potential to exacerbate everything that is already inequitable in global education. As we listened to colleagues from countries where bandwidth is unstable, where institutions cannot afford commercial tools, where linguistic diversity is vast and historically marginalised, the conversation around AI looked very different. It was not about academic integrity or administrative efficiencies. It was about who gets left behind when AI becomes the new foundation of learning, and who is rendered invisible by systems trained overwhelmingly on Western, English-language, affluent-world data.
These conversations were a turning point. They stripped away any illusion that a responsible AI policy for our institution could be written in isolation from questions of justice, geopolitics, or environmental reality. It became increasingly clear that our institutional response must not add to global divides, and that if we were serious about leading in this space, our policy must reflect that wider moral horizon. AI adoption cannot become another form of educational colonialism. It cannot widen the gap between institutions with resources and those without. And critically, it cannot silently reproduce the inequalities and biases of the datasets on which it is built.
Returning from UNESCO, we began our formal policy work with a renewed sense of ethical responsibility. We decided that if AI forces higher education to rethink its foundations, then we must start with ours. This is what led us to anchor the full policy we finalised this year explicitly in ecopedagogy.
Ecopedagogy, introduced to us earlier this year at the EDEN Conference in Bologna through a paper by Wilson & Wardak, gave us a framework capacious enough to hold together the concerns we heard at UNESCO: not only the digital divide, but the environmental cost of large language models; not only algorithmic bias, but the ways AI centralises epistemic authority in systems that reflect only a sliver of global knowledge; not only increased efficiency, but the human labour that is disguised or displaced in the process.
Beginning with this lens transformed the whole of our policy-making process. Instead of producing a compliance document, we convened a large, cross-institutional working group of academics, professional services staff, digital specialists, and students. The size of the group was intentional: AI touches every part of university life including assessment, curriculum, wellbeing, data governance, procurement, widening participation, sustainability, and the future of work. No single expertise could speak for all of these concerns. Our working group therefore became a collective reflective space, where conversations ranged from carbon footprint concerns to the linguistic bias of chatbots, from students’ anxieties about misconduct allegations to the ethical implications of using tools trained on unconsented labour.
From values to coherence
What emerged was not a mash-up of reactive rules, but a coherent narrative. We recognised that AI was already reshaping how students learn, collaborate, write, and express themselves. We saw its impact on staff workload and digital confidence. We understood its implications for students with limited access to devices or stable internet. And we grasped its repercussions for global equity: a student in London using the same tool as a student in Nairobi does not enter the interaction with the same bandwidth, the same cultural alignment, the same linguistic recognition, or the same environmental cost. UNESCO had made that brutally clear.
Only when we had articulated this full landscape did the principles of our policy begin to fall into place. The first was deceptively simple: AI must remain in service of human learning, not the other way round. This principle, remarkably easy to say but difficult to live by, became our anchor. It meant we could not treat automation as inherently desirable simply because it seemed inevitable. It meant we had to ask instead what forms of learning risked being hollowed out if we replaced them with generative tools. It meant we needed to develop students’ judgement, not just their proficiency.
From here, our other principles followed in a domino effect. If we are committed to widening participation, then AI integration must be designed for inclusion and cultural responsiveness. If we are committed to sustainability, then adoption must be weighed against environmental impact. If we believe in transparency, then both staff and students must declare how they use AI. If we believe in academic integrity, then we must educate for integrity rather than policing for misconduct. And if we believe in preparing students for a rapidly evolving labour market, then AI literacy must become part of the curriculum rather than a bolt-on workshop.
As we translated these principles into practice, the policy expanded into something far more ambitious than we initially anticipated. Curriculum design now includes discipline-specific AI literacy. Assessment practices will require explicit articulation of what AI can and cannot be used for, and why. We will build staff and student toolkits, design an AI champions network, and rather than producing a static rulebook, we are creating a living framework responsive to technological, pedagogical, and societal shifts.
An act of self-definition
In many ways, the most powerful lesson from UNESCO is that national-level conversations about AI in education are not enough. Universities do not stand alone. They are actors within a global ecosystem shaped by unequal access to infrastructure, uneven regulatory regimes, and differing cultural relationships with technology. Our policy therefore reflects not only our institutional values but a commitment to global responsibility. It is an attempt to lead in a way that does not deepen divides, but models what ethical, reflective, inclusive AI adoption can look like, even in a sector that often feels trapped between innovation and fear.
If there is one message we would offer to the wider sector, it is this: the question is not how quickly institutions can produce AI policies, but what kind of stories those policies tell. A policy grounded in fear will produce defensive teaching. A policy grounded in tools will expire as fast as the tools themselves. But a policy grounded in values, shaped by global listening, ecological understanding, and educational purpose, will help universities navigate uncertainty with integrity.
What began for us as a technical challenge became, through UNESCO and through our collaborative internal process, a profound act of institutional self-definition. By rooting our policy in who we are, rather than in what AI can do, we found ourselves not reacting to disruption but shaping our stance within it. In doing so, we discovered that AI policy is not merely about technology; it is about the kind of educational future we want to co-create, at home and across the world. And that is why, for us, values were not just the right place to start; they were the only place.