  • Here’s what happens when you start AI policy with values, not tools

    When artificial intelligence arrived in higher education at breakneck speed, the instinct across the sector was largely the same: respond quickly, tighten regulations, issue warnings about academic misconduct, and invest in systems that promised to separate the “real” student voice from the synthetic one.

    Yet the faster these policies emerged, the more alike they became: reactive, tool-centred, and preoccupied with risk. In many places, AI entered the university not as an opportunity to reimagine teaching and learning, but as a problem to be contained.

    At our institution, we realised early on that following this path would not only fall short of our ambitions, but would also, in a way, betray our identity. As a university of social purpose, we hold ourselves to a set of commitments that run deeper than operational efficiency or regulatory compliance. These commitments to inclusion, sustainability, human dignity, equity, and transparent communication shape everything from our curricula to our community partnerships. If we treated AI purely as a technical issue, we would be abandoning the very values that define who we are. So instead of asking “Which tools should we allow?” we began with a very different question: “What does our educational purpose require of us in an age shaped by AI?”

    The moment of clarity

    This question crystallised for us during UNESCO Digital Learning Week in September. Being in a room with educators, policymakers, and researchers from across the world made one thing uncomfortably clear: AI has the potential to exacerbate everything that is already inequitable in global education. As we listened to colleagues from countries where bandwidth is unstable, where institutions cannot afford commercial tools, where linguistic diversity is vast and historically marginalised, the conversation around AI looked very different. It was not about academic integrity or administrative efficiencies. It was about who gets left behind when AI becomes the new foundation of learning, and who is rendered invisible by systems trained overwhelmingly on Western, English-language, affluent-world data.

    These conversations were a turning point. They stripped away any illusion that a responsible AI policy for our institution could be written in isolation from questions of justice, geopolitics, or environmental reality. It became increasingly clear that our institutional response must not add to global divides. If we were serious about leading in this space, our policy had to reflect that wider moral horizon. AI adoption cannot become another form of educational colonialism. It cannot widen the gap between institutions with resources and those without. And critically, it cannot silently reproduce the inequalities and biases of the datasets on which it is built.

    Returning from UNESCO, we began our formal policy work with a renewed sense of ethical responsibility. We decided that if AI forces higher education to rethink its foundations, then we must start with ours. This is what led us to anchor our approach explicitly in ecopedagogy, a commitment carried through the full policy we finalised this year.

    Ecopedagogy, introduced to us earlier this year at the EDEN Conference in Bologna in a paper by Wilson & Wardak, gave us a framework capacious enough to hold together the concerns we heard at UNESCO: not only the digital divide, but the environmental cost of large language models; not only algorithmic bias, but the ways AI centralises epistemic authority in systems that reflect only a sliver of global knowledge; not only increased efficiency, but the human labour that is disguised or displaced in the process.

    Beginning with this lens transformed the whole of our policy-making process. Instead of producing a compliance document, we convened a large, cross-institutional working group of academics, professional services staff, digital specialists, and students. The size of the group was intentional: AI touches every part of university life including assessment, curriculum, wellbeing, data governance, procurement, widening participation, sustainability, and the future of work. No single expertise could speak for all of these concerns. Our working group therefore became a collective reflective space, where conversations ranged from carbon footprint concerns to the linguistic bias of chatbots, from students’ anxieties about misconduct allegations to the ethical implications of using tools trained on unconsented labour.

    From values to coherence

    What emerged was not a mash-up of reactive rules, but a coherent narrative. We recognised that AI was already reshaping how students learn, collaborate, write, and express themselves. We saw its impact on staff workload and digital confidence. We understood its implications for students with limited access to devices or stable internet. And we grasped its repercussions for global equity: a student in London using the same tool as a student in Nairobi is not entering the interaction with the same bandwidth, the same cultural alignments, the same linguistic recognition, or the same environmental cost. UNESCO had made that brutally clear.

    Only when we had articulated this full landscape did the principles of our policy begin to fall into place. The first was deceptively simple: AI must remain in service of human learning, not the other way round. This principle, remarkably easy to say but difficult to live by, became our anchor. It meant we could not treat automation as inherently desirable simply because it seemed inevitable. It meant we had to ask instead what forms of learning risked being hollowed out if we replaced them with generative tools. It meant we needed to develop students’ judgement, not just their proficiency.

    From here, our other principles followed in a domino effect. If we are committed to widening participation, then AI integration must be designed for inclusion and cultural responsiveness. If we are committed to sustainability, then adoption must be weighed against environmental impact. If we believe in transparency, then both staff and students must declare how they use AI. If we believe in academic integrity, then we must educate for integrity rather than police for misconduct. And if we believe in preparing students for a rapidly evolving labour market, then AI literacy must become part of the curriculum rather than a bolt-on workshop.

    As we translated these principles into practice, the policy expanded into something far more ambitious than we initially anticipated. Curriculum design now includes discipline-specific AI literacy. Assessment practices will require explicit articulation of what AI can and cannot be used for, and why. We will build staff and student toolkits, design an AI champions network, and rather than producing a static rulebook, we are creating a living framework responsive to technological, pedagogical, and societal shifts.

    An act of self-definition

    In many ways, the most powerful lesson from UNESCO is that national-level conversations about AI in education are not enough. Universities do not stand alone. They are actors within a global ecosystem shaped by unequal access to infrastructure, uneven regulatory regimes, and differing cultural relationships with technology. Our policy therefore reflects not only our institutional values but a commitment to global responsibility. It is an attempt to lead in a way that does not deepen divides, but models what ethical, reflective, inclusive AI adoption can look like, even in a sector that often feels trapped between innovation and fear.

    If there is one message we would offer to the wider sector, it is this: the question is not how quickly institutions can produce AI policies, but what kind of stories those policies tell. A policy grounded in fear will produce defensive teaching. A policy grounded in tools will expire as fast as the tools themselves. But a policy grounded in values, shaped by global listening, ecological understanding, and educational purpose will help universities navigate uncertainty with integrity.

    What began for us as a technical challenge became, through UNESCO and through our collaborative internal process, a profound act of institutional self-definition. By rooting our policy in who we are, rather than in what AI can do, we found ourselves not reacting to disruption but shaping our stance within it. In doing so, we discovered that AI policy is not merely about technology; it is about the kind of educational future we want to co-create, at home and across the world. And that is why, for us, values were not just the right place to start, they were the only place.

  • Which way do you lean?

    On November 26 dozens of articles written by News Decoder students will go to a panel of three judges as part of our twice-yearly storytelling competition. One of the criteria they will use to decide on the winners is this: Did the student report the story objectively, without bias? It is one of five criteria (another being total subjectivity on the part of each judge — sometimes a story is just a really great story).

    Here is the question: How does one define bias? You’d think I’d be able to answer this question easily, since I’ve written whole articles on objectivity, which is commonly thought of as the absence of bias. Webster’s Dictionary defines bias as an inclination of temperament or outlook, or an instance of such prejudice.

    Basically, you are for something or against something. A problem with trying to eliminate bias is that we often don’t recognize when we lean one way or the other. If something is true it is true, right? How can truth be biased? But how many ridiculous arguments revolve around competing definitions of truth?

    News Decoder correspondent Enock Wanderema is an experienced journalist who is currently studying behavioral science. Two things he’s been thinking about are what are known as availability bias and confirmation bias.

    Availability bias is our tendency to rely on what we can remember. If we can remember it, it seems more important or more true. That leads us to give more weight to things that happened recently, because we remember them more easily.

    With confirmation bias, we tend to search for, interpret and remember information that confirms what we already believe, and we overlook anything that contradicts those beliefs.

    “This happens automatically because constantly questioning everything we believe would be cognitively exhausting,” he wrote. “It means we can become trapped in false beliefs even when contradictory evidence accumulates and this matters enormously in contexts that are complex, novel, abstract or ideologically loaded; exactly the kinds of situations modern life presents constantly, but which were rare in ancestral environments.”

    Bias in journalism

    This becomes more problematic when we talk about journalists. “Journalists are the primary gatekeepers of information about complex issues people cannot directly experience, but journalists are humans with the same biases,” Wanderema said.

    These biases come into play in the stories reporters or news organizations choose to cover or not cover. They inadvertently rely on what they remember and are familiar with when deciding whether something is important enough to cover, and which events and people to ignore.

    This can leave whole populations invisible and important events ignored. If something has been happening and no one has covered it, how important can it be?

    News Decoder correspondent Paul Sochaczewski struggles with the idea of bias not only in news stories but in writing non-fiction biographies of people long dead. “All journalism has bias,” he wrote. “Point of view, word choice, selection of details, who to quote and accuracy of that quote and so on.”

    In a 300-page book you can’t tell someone’s whole life story. In summing up a life, it is the biographer who decides which events are important and which ones paint the most accurate portrait of a person. It is the biographer who decides what to leave out.

    A picture of reality

    In some ways bias in storytelling is like the decisions a photographer makes in taking a photo. How many photos taken of me made me look awful? And yet there were a few that made me look better than I generally do. It had to do with the lighting available at the time and the photographer’s desire to make me look good.

    The photographer isn’t making anything up, but by adjusting where I stand, what’s around me, how my hair falls and how the sun hits my face, she can change my look from an old hag who just woke up in a terrible mood to a beautiful person in the prime of her life.

    News Decoder correspondent Barry Moody says that you show bias when you lean towards one side or the other, either in the way you present the information or in giving more space to one side of an argument. Instead, you should present the facts and let your readers form their own opinions. “But don’t allow your own, either consciously or subconsciously, to intrude,” he said.

    Kirby Moss, a professor of journalism and mass communication at California State Polytechnic University, Humboldt, sees bias as the inability, or the lack of awareness needed, to critique your own perspective.

    That goes back to the notion of objectivity being the absence of bias. It is difficult to eliminate our own bias if we don’t recognize it in the first place.

    Wanderema said that our biases are often mental shortcuts that allow us to process the flood of information we are constantly bombarded with, most of it from media rather than from direct experience. We pay attention to some things but not others. We are skeptical of some facts but easily accept others.

    “The result is a complex feedback loop where journalists’ biases shape coverage, coverage triggers audience biases, audience preferences reinforce journalistic practices and the entire system systematically distorts public understanding of reality,” Wanderema said. “Not through deliberate deception, but through the predictable operation of cognitive shortcuts that evolved to help humans navigate immediate physical environments.”

    Personally, in addressing the thorny problem of bias, I rely on what I have long decided should be the first rule of journalism: honesty. When reporting, I try to lay out facts as I’ve discovered them, after making a genuine effort to explore different perspectives and sides. But as Moss explained, it is important that I explore my own perspective so that I can then fess up to readers my own biases and conclusions. This lets them know where I stand so that they can accept or reject the conclusions I’ve made.

    In trying to eliminate our biases, we end up deceiving not just our readers, but ourselves.


    Questions to consider:

    1. What is confirmation bias?

    2. In what ways can personal bias affect what stories you choose to tell?

    3. In what ways do you think that you are biased?