This blog was kindly authored by Wioletta Nawrot, Associate Professor and Teaching & Learning Lead at ESCP Business School, London Campus.
Generative AI has entered higher education faster than most institutions can respond. The question is no longer whether students and staff will use it, but whether universities can ensure it strengthens learning rather than weakens it. Used well, AI can support personalised feedback, stimulate creativity, and free academic time for deeper dialogue. Used poorly, it can erode critical thinking, distort assessment, and undermine trust.
The difference lies not in the tools themselves but in how institutions guide their use through pedagogy, governance, and culture.
AI is a cultural and pedagogical shift, not a software upgrade
Across higher education, early responses to AI have often focused on tools. Yet treating AI as a bolt-on risks missing the real transformation: a shift in how academic communities think, learn, and make judgements.
Some universities began with communities of practice rather than software procurement. At ESCP Business School, stakeholders, including staff and students, were invited to experiment with AI in teaching, assessment and student support. The experience showed that experimentation is essential, but only when it feeds into a coherent framework of shared principles and staff development.
Three lessons have emerged from these rollouts. First, AI can assist but should not replace human judgement: staff report using AI to draft feedback or generate case study variations, while final decisions and marking remain human. Second, students learn more when they critique AI rather than copy it: exercises in which students compare AI responses to academic sources or identify errors can strengthen critical thinking. Third, governance matters more than enthusiasm: clarity around data privacy, authorship, assessment and acceptable use is essential to protect trust.
Assessment: the hardest and most urgent area of reform
Once students can generate fluent essays or code in seconds, traditional take-home assignments are no longer reliable indicators of learning. At ESCP we have responded by:
- Introducing oral assessments, in-class writing, and step-by-step submissions to verify individual understanding.
- Asking students to reference class materials and discussions, or unique datasets that AI tools cannot access.
- Updating assessment rubrics to prioritise analytical depth, originality, transparency of process, and intellectual engagement.
Students should be encouraged to state whether AI was used, how it contributed, and where its outputs were adapted or rejected. This mirrors professional practice by acknowledging assistance without outsourcing judgement. It also moves universities from policing to teaching: from detecting misconduct to encouraging responsible use.
AI literacy and academic inequality
AI does not benefit all students equally. Those with strong subject knowledge are better able to question AI’s inaccuracies; others may accept outputs uncritically.
Generic workshops alone are insufficient. AI literacy must be embedded within disciplines: in law through case analysis, in business through ethical decision-making, and in science through data validation. Students can be taught not just how to use AI, but how to test it, challenge it, and cite it appropriately.
Staff development is equally important. Not all academics feel confident incorporating AI into feedback, supervision or assessments. Models such as AI champions, peer-led workshops, and campus coordinators can increase confidence and avoid digital divides between departments.
Policy implications for UK higher education
If AI adoption remains fragmented, the UK's higher education sector risks inconsistency, inequity, and reputational damage. A strategic approach is needed at both institutional and national levels.
Universities should define the educational purpose of AI before adopting tools, and consider reforming assessments to remain robust. Structured professional development, opportunities for peer exchange, and open dialogue with students about what constitutes legitimate and responsible use will also support the effective integration of AI into the sector.
However, it’s not only institutions that need to take action. Policymakers and sector bodies should develop shared reference points for transparency and academic integrity. As a nation, we must invest in research into AI’s impact on learning outcomes and ensure quality frameworks reflect AI’s role in higher education processes, such as assessment and skills development.
The European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) sets a prescriptive model for compliance in education. The UK's principles-based approach gives universities flexibility, but this comes with accountability. Without shared standards, the sector risks inconsistent practice and erosion of public trust. Graduate employability may also suffer if students are not taught how to use AI ethically while continuing to develop their critical thinking and analytical skills.
Implications for the sector
The experience of institutions like ESCP Business School shows that the quality of teaching with AI depends less on the technology itself than on the judgement and educational purpose guiding its use.
Generative AI is already an integral part of students’ academic lives; higher education must now decide how to shape that reality. Institutions that approach AI through strategy, integrity, and shared responsibility will not only protect learning, but renew it, strengthening the human dimension that gives teaching its meaning.

