Transparency about AI should be a sector-wide principle

Josh Thorpe’s recent Wonkhe article – What should the higher education sector do about AI fatigue? – captured how many are feeling about artificial intelligence. The sector is tired of hype, uncertainty, and trying to keep up with a technology that seems to evolve faster than our capacity to respond. But AI fatigue, as that article suggests, is not failure. It’s a signal that we need to pause, reflect, and respond with human-centred coherence.

One of the most accessible and powerful responses to AI uncertainty is transparency. A consistent approach to declaring AI use in work or assessment can start small: many educators take a first step by simply stating, “no AI is used in…”

That statement alone supports the development of trust between educators and students, and it can create space for dialogue. It’s not about rushing into AI adoption; it’s about being honest and intentional, whatever the current practice. From there, we can begin to explore what small, discipline-relevant, and appropriate uses of AI might look like.

The journey starts with transparency, not technology. We need to support staff in engaging with AI in ways that feel ethical, manageable, and empowering. That support shouldn’t begin with technical training or institutional mandates, but with a simple request to communicate clearly about AI use (or non-use) in our teaching, learning and assessment practices.

Sheffield Hallam University has implemented an AI Transparency Scale as a communication tool that helps educators consider how they disclose AI use to students and clarify expectations for assessment. It’s a conversation starter which prompts educators to reflect on whether AI tools are used in their practice, how this use is communicated to students, and how transparency supports academic integrity and student trust. The scale is helping educators move from uncertainty to clarity, not by simplifying AI, but by humanising and clarifying how we engage with it.

Moving to transparency

For educators wondering where to start, confident transparency begins with making AI clear and understandable within its specific context. Transparency builds trust and sets clear expectations for staff and students. A simple statement, even a neutral one such as “AI tools were not used in the development of this module,” provides clarity and signals openness. You might adopt a tool like the AI Transparency Scale, whose prompts can scaffold your communication of AI use, or create your own local language. Even short discussions in course or programme team meetings can surface valuable insights and lead to shared practices. The goal is not just to disclose, but to create shared understanding and practice.

Engaging students in the conversation about AI and inviting them to share how they are using AI tools helps educators understand emerging practices and co-create ethical boundaries. As Naima Rahman and Gunter Saunders noted in their Wonkhe article, students want AI integrated into their learning – but they want it to be fair, transparent, and ethical.

Listening and responding transparently reinforces trust. Together, explore questions such as: “what does responsible AI use look like in our subject area?” Consider where automation or analysis might add value, and where human judgment remains essential.

Transparency here means being explicit about why certain tasks should remain human-led and where AI might play a supportive role. Positioning students as co-leaders in these discussions builds a stronger, more transparent foundation for responsible AI use.

From individual burden to institutional strategy

Josh Thorpe’s article rightly calls out the lack of institutional coordination and fragmented AI discourse. The burden of response has fallen largely on individuals, with limited support from policy, leadership, or infrastructure.

To move forward, we need coherent institutional leadership that frames AI not just as a technical challenge, but as a challenge of support, pedagogy, and ethics. By sharing our experiences, resources, and approaches openly, we can develop shared principles to guide diverse practices across the sector. Finally, we need alignment with the changing nature of authorship, assessment, and professional competence in an AI-enabled world. Simon Sneddon explores the need to prepare students for the world of (artificial intelligence-enabled) work in another recent Wonkhe article.

Transparency offers a bridge between policy and practice. It’s a principle that can be embedded in institutional guidance, supported through professional development, and aligned with sector-wide values.

As the Office for Students, Jisc, and other bodies continue to shape the AI landscape and how we navigate it, institutions must find ways to empower their staff, not just inform them. That means creating space for reflection, dialogue, and ethical experimentation.

Transparency alone will not solve the challenges of AI in education, but it is a good place to start. The sector can begin to move from fatigue to fluency, one transparent step at a time.

AI transparency statement: In developing this article, I used Microsoft Copilot to support the writing process. I provided original textual inputs, guided the reference of relevant existing materials, added additional sources, and critically reviewed and refined generated outputs to produce the final piece. This corresponds to level 3 of the AI Transparency Scale, indicating active human oversight, original content, and editorial control.