Category: Generative AI

  • How educators can use Gen AI to promote inclusion and widen access

    How educators can use Gen AI to promote inclusion and widen access

    by Eleni Meletiadou

    Introduction

    Higher education faces a pivotal moment as Generative AI becomes increasingly embedded within academic practice. While AI technologies offer the potential to personalize learning, streamline processes, and expand access, they also risk exacerbating existing inequalities if not intentionally aligned with inclusive values. Building on our QAA-funded project outputs, this blog outlines a strategic framework for deploying AI to foster inclusion, equity, and ethical responsibility in higher education.

    The digital divide and GenAI

    Extensive research shows that students from marginalized backgrounds often face barriers in accessing digital tools, digital literacy training, and peer networks essential for technological confidence. GenAI exacerbates this divide, demanding not only infrastructure (devices, subscriptions, internet access) but also critical AI literacy. According to previous research, students with higher AI competence outperform peers academically, deepening outcome disparities.

    However, the challenge is not merely technological; it is social and structural. WP (Widening Participation) students often remain outside informal digital learning communities where GenAI tools are introduced and shared. Without intervention, GenAI risks becoming a “hidden curriculum” advantage for already-privileged groups.

    A framework for inclusive GenAI adoption

    Our QAA-funded “Framework for Educators” proposes five interrelated principles to guide ethical, inclusive AI integration:

    • Understanding and Awareness: Foundational AI literacy must be prioritized. Awareness campaigns showcasing real-world inclusive uses of AI (eg Otter.ai for students with hearing impairments) and tiered learning tracks from beginner to advanced levels ensure all students can access, understand, and critically engage with GenAI tools.
    • Inclusive Collaboration: GenAI should be used to foster diverse collaboration, not reinforce existing hierarchies. Tools like Miro and DeepL can support multilingual and neurodiverse team interactions, while AI-powered task management (eg Notion AI) ensures equitable participation. Embedding AI-driven teamwork protocols into coursework can normalize inclusive digital collaboration.
    • Skill Development: Higher-order cognitive skills must remain at the heart of AI use. Assignments that require evaluating AI outputs for bias, simulating ethical dilemmas, and creatively applying AI for social good nurture critical thinking, problem-solving, and ethical awareness.
    • Access to Resources: Infrastructure equity is critical. Universities must provide free or subsidized access to key AI tools (eg Grammarly, ReadSpeaker), establish Digital Accessibility Centers, and proactively support economically disadvantaged students.
    • Ethical Responsibility: Critical AI literacy must include an ethical dimension. Courses on AI ethics, student-led policy drafting workshops, and institutional AI Ethics Committees empower students to engage responsibly with AI technologies.

    Implementation strategies

    To operationalize the framework, a phased implementation plan is recommended:

    • Phase 1: Needs assessment and foundational AI workshops (0–3 months).
    • Phase 2: Pilot inclusive collaboration models and adaptive learning environments (3–9 months).
    • Phase 3: Scale successful practices, establish Ethics and Accessibility Hubs (9–24 months).

    Key success metrics include increased AI literacy rates, participation from underrepresented groups, enhanced group project equity, and demonstrated critical thinking skill growth.

    Discussion: opportunities and risks

    Without inclusive design, GenAI could deepen educational inequalities, as recent research warns. Students without access to GenAI resources or social capital will be disadvantaged both academically and professionally. Furthermore, impersonal AI-driven learning environments may weaken students’ sense of belonging, exacerbating mental health challenges.

    Conversely, intentional GenAI integration offers powerful opportunities. AI can personalize support for students with diverse learning needs, extend access to remote or rural learners, and reduce administrative burdens on staff – freeing them to focus on high-impact, relational work such as mentoring.

    Conclusion

    The future of inclusive higher education depends on whether GenAI is adopted with a clear commitment to equity and social justice. As our QAA project outputs demonstrate, the challenge is not merely technological but ethical and pedagogical. Institutions must move beyond access alone, embedding critical AI literacy, equitable resource distribution, community-building, and ethical responsibility into every stage of AI adoption.

    Generative AI will not close the digital divide on its own. It is our pedagogical choices, strategic designs, and values-driven implementations that will determine whether the AI-driven university of the future is one of exclusion – or transformation.

    This blog is based on recent outputs from our QAA-funded project, “Using AI to promote education for sustainable development and widen access to digital skills”.

    Dr Eleni Meletiadou is an Associate Professor (Teaching) at London Metropolitan University specialising in Equity, Diversity, and Inclusion (EDI), AI, inclusive digital pedagogy, and multilingual education. She leads the Education for Social Justice and Sustainable Learning and Development (RILEAS) and the Gender Equity, Diversity, and Inclusion (GEDI) Research Groups. Dr Meletiadou’s work, recognised with the British Academy of Management Education Practice Award (2023), focuses on transforming higher education curricula to promote equitable access, sustainability, and wellbeing. With over 15 years of international experience across 35 countries, she has led numerous projects in inclusive assessment and AI-enhanced learning. She is a Principal Fellow of the Higher Education Academy and serves on several editorial boards. Her research interests include organisational change, intercultural communication, gender equity, and Education for Sustainable Development (ESD). She actively contributes to global efforts in making education more inclusive and future-ready. LinkedIn: https://www.linkedin.com/in/dr-eleni-meletiadou/

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • What does it mean if students think that AI is more intelligent than they are?

    What does it mean if students think that AI is more intelligent than they are?

    The past couple of years in higher education have been dominated by discussions of generative AI – how to detect it, how to prevent cheating, how to adapt assessment. But we are missing something more fundamental.

    AI isn’t just changing how students approach their work – it’s changing how they see themselves. If universities fail to address this, they risk producing graduates who lack both the knowledge and the confidence to succeed in employment and society. Consequently, the value of a higher education degree will diminish.

    In November, a first-year student asked me if ChatGPT could write their assignment. When I said no, they replied: “But AI is more intelligent than me.” That comment has stayed with me ever since.

    If students no longer trust their own ability to contribute to discussions or produce work of value, the implications stretch far beyond academic misconduct. Confidence affects motivation, resilience and self-belief, which in turn affect sense of community, assessment grades, and graduate skills.

    I have noticed that few discussions focus on the deeper psychological shift – students’ changing perceptions of their own intelligence and capability. This change is a key antecedent for the erosion of a sense of community, AI use in learning and assessment, and the underdevelopment of graduate skills.

    The erosion of a sense of community

    In 2015, when I began teaching, I would walk into a seminar room and find students talking to one another about how worried they were about the deadline, how boring the lecture was, or how many drinks they had on Wednesday night. Yes, they would sit at the back, not always do the pre-reading, and go quiet for the first few weeks when I asked a question – but they were always happy to talk to one another.

    Fast forward to 2025: campus feels empty, and students come into class and sit alone. Even final-year students who have been together for three years may sit with a “friend” but not really say anything as they stare at their phones. I have a final-year student who is achieving first-class grades but admitted he has not been in the library once this academic year and barely knows anyone to talk to. This may not seem like a big thing, but it illustrates the lack of community and relationships formed at university. It is well known that peer-to-peer relationships are one of the biggest influences on attendance and engagement. So when students fail to form networks, it is unsurprising that motivation declines.

    While professional services, students’ union, and support staff continuously offer ways to improve the community, at a time when students are working longer hours and facing a cost-of-living crisis, we cannot expect them to attend extracurricular academic or non-academic activities. Therefore, timetabled lectures and seminars need to be at the heart of building relationships.

    AI in learning and assessment

    While marking first-year marketing assignments – a subject I’ve taught across multiple universities for a decade – I noticed a clear shift. Typically, I expect a broad range of marks, but this year, students clustered at two extremes: either very high or alarmingly low. The feedback was strikingly similar: “too vague,” “too descriptive,” “missing taught content.”

    I knew some of these students were engaged and capable in class, yet their assignments told a different story. I kept returning to that student’s remark and realised: the students who normally land in the middle – your solid 2:2 and 2:1 cohort – had turned to AI. Not necessarily to cheat, but because they lacked confidence in their own ability. They believed AI could articulate their ideas better than they could.

    The rapid integration of AI into education isn’t just changing what students do – it’s changing what they believe they can do. If students don’t think they can write as well as a machine, how can we expect them to take intellectual risks, engage critically, or develop the resilience needed for the workplace?

    Right now, universities are at a crossroads. We can either design assessments as if nothing has changed, pivot back to closed-book exams to preserve “authentic” academic work, or restructure assessment to empower students, build confidence, and provide something of real value to both learners and employers. Only the third option moves higher education forward.

    Deakin University’s Phillip Dawson has recently argued that we must ensure assessment measures what we actually intend to assess. His point resonated with me.

    AI is here to stay, and it can enhance learning and productivity. Instead of treating it primarily as a threat or retreating to closed-book exams, we need to ask: what do we really need to assess? For years, we have moved away from exams because they don’t reflect real-world skills or accurately measure understanding. That reasoning still holds, but the assessment landscape is shifting again. Instead of focusing on how students write about knowledge, we should be assessing how they apply it.

    Underdevelopment of graduate skills

    If we don’t rethink pedagogy and assessment, we risk producing graduates who are highly skilled at facilitating AI rather than using it as a tool for deeper analysis, problem-solving, and creativity. Employers are already telling us they need graduates who can analyse and interpret data, think critically to solve problems, communicate effectively, show resilience and adaptability, demonstrate emotional intelligence, and work collaboratively.

    But students can’t develop these skills if they don’t believe in their own ability.

    Right now, students are using AI tools for most activities, including online searching, proofreading, answering questions, generating examples, and even writing reflective pieces. I am confident that if I asked first years to write a two-minute speech about why they came to university, the majority would use AI in some way. There is no space – or incentive – for them to illustrate their skill development.

    This semester, I trialled a small intervention after getting fed up with seeing heads down in laptops. I asked my final-year students to put laptops and phones on the floor for the first two hours of a four-hour workshop.

    At first, they were visibly uncomfortable – some looked panicked, others bored. But after ten minutes, something changed. They wrote more, spoke more confidently, and showed greater creativity. As soon as they returned to technology, their expressions became blank again. This isn’t about banning AI, but about ensuring students have fun learning and have space to be thinkers, rather than facilitators.

    Confidence-building

    If students’ lack of confidence is driving them to rely on AI to “play it safe”, we need to acknowledge the systemic problem. Confidence is an academic issue. Confidence underpins everything in the student’s experience: classroom engagement, sense of belonging, motivation, resilience, critical thinking, and, of course, assessment quality. Universities know this, investing in mentorship schemes, support services, and initiatives to foster belonging. But confidence-building cannot be left to professional services alone – it must be embedded into curriculum design and assessment.

    Don’t get me wrong, I am fully aware of the pressures on academic staff, and telling them to improve sense of community, assessment, and graduate skills feels like another time-consuming task. Universities need to recognise that without improving workload planning models to allow academics the freedom to focus on and explore pedagogic approaches, we fall into the trap of devaluing the degree.

    In addition, if universities want to stay relevant, they need agile structures that allow academics to test new approaches and respond quickly, just like the “real world”. Academics should not be creating or modifying assessments today that won’t be implemented for another 18 months. Policies designed to ensure quality must also ensure adaptability. Otherwise, higher education will always be playing catch-up – first with AI, then with whatever comes next.

    Will universities continue producing AI-dependent graduates, or will they equip students with the confidence to lead in an AI-driven world?

    Source link

  • How our researchers are using AI – and what we can do to support them

    How our researchers are using AI – and what we can do to support them

    We know that the use of generative AI in research is now ubiquitous. But universities have limited understanding of who is using large language models in their research, how they are doing so, and what opportunities and risks this throws up.

    The University of Edinburgh hosts the UK’s first, and largest, group of AI expertise – so naturally, we wanted to find out how AI is being used. We asked our three colleges to check in on how their researchers were using generative AI, to inform what support we provide, and how.

    Using AI in research

    The most widespread use, as we would expect, was to support communication: editing, summarising and translating texts or multimedia. AI is helping many of our researchers to correct language, improve clarity and succinctness, and transpose text to new mediums including visualisations.

    Our researchers are increasingly using generative AI for retrieval: identifying, sourcing and classifying data of different kinds. This may involve using large language models to identify and compile datasets, bibliographies, or to carry out preliminary evidence syntheses or literature reviews.

    Many are also using AI to conduct data analysis for research. Often this involves developing protocols to analyse large data sets. It can also involve more open searches, with large language models detecting new correlations between variables, and using machine learning to refine their own protocols. AI can also test complex models or simulations (digital twins), or produce synthetic data. And it can produce new models or hypotheses for testing.

    AI is of course evolving fast, and we are seeing the emergence of more niche and discipline-specific tools. For example, self-taught reasoning (STaR) models generate their own rationales, which can be used to fine-tune a model to answer a range of research questions, while retrieval-augmented generation (RAG) enables large language models to access external data that enhances the breadth and accuracy of their outputs.
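
    To make the retrieval-augmented generation pattern concrete, the sketch below shows the basic retrieve-then-generate loop: a small corpus is ranked against a query using a simple bag-of-words similarity, and the top passages are placed into the prompt. The corpus, the scoring, and the generate placeholder are illustrative assumptions standing in for a real vector store, embedding model, and large language model.

    # Minimal retrieval-augmented generation (RAG) sketch.
    # Assumptions: generate() stands in for any large language model call; the
    # corpus and bag-of-words cosine scoring are toy substitutes for a real
    # vector database and embedding model.
    from collections import Counter
    import math

    CORPUS = [
        "The 2021 survey linked rainfall variability to crop yield decline.",
        "Interview transcripts describe barriers to rural broadband access.",
        "The lab protocol specifies a 48-hour incubation at 37 degrees.",
    ]

    def tf_vector(text: str) -> Counter:
        # Crude term-frequency representation of a text.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        # Cosine similarity between two term-frequency vectors.
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query: str, k: int = 2) -> list:
        # Rank the corpus against the query and keep the top-k passages.
        q = tf_vector(query)
        return sorted(CORPUS, key=lambda doc: cosine(q, tf_vector(doc)), reverse=True)[:k]

    def generate(prompt: str) -> str:
        # Placeholder for a real model call (eg via an institutional interface).
        return "[model answer grounded in the supplied context]"

    def rag_answer(question: str) -> str:
        # Stuff the retrieved passages into the prompt, then generate.
        context = "\n".join(retrieve(question))
        prompt = "Answer using only this context:\n" + context + "\n\nQuestion: " + question
        return generate(prompt)

    print(rag_answer("What did the survey say about rainfall and crop yields?"))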

    Across these types of use, AI can improve communication and significantly save time. But it also poses significant risks, which our researchers were generally alert to. These involve well-known problems with accuracy, bias and confabulation – especially where researchers use AI to identify new (rather than test existing) patterns, to extrapolate, or to underpin decision-making. There are also clear risks around sharing of intellectual property with large language models. And not least, researchers need to clearly attribute the use of AI in their research outputs.

    The regulatory environment is also complex. While the UK does not as yet have formal AI legislation, many UK and international funders have adopted guidelines and rules. For example, the European Union has a new AI Act, and EU funded projects need to comply with European Commission guidelines on AI.

    Supporting responsible AI

    Our survey has given us a steer on how best to support and manage the use of AI in research – leading us to double down on four areas that require particular support:

    Training. Not surprisingly, the use of generative AI is far more prevalent among early career researchers. This raises issues around training, supervision and oversight. Our early career researchers need mentoring and peer support. But more senior researchers don’t necessarily have the capacity to keep pace with the rapid evolution of AI applications.

    This suggests the need for flexible training opportunities. We have rolled out a range of courses, including three new basic AI courses to get researchers started in the responsible use of AI in research, and online courses on ethics of AI.

    We are also ensuring our researchers can share peer support. We have set up an AI Adoption Hub, and are developing communities of practice in key areas – notably AI and Health, one of the most active areas of AI research. A similar initiative is being developed for AI and Sustainability.

    Data safety. Our researchers are rightly concerned about feeding their data into large language models, given complex challenges around copyright and attribution. For this reason, the university has established its own interface with the main large language models, including ChatGPT – the Edinburgh Language Model (ELM). ELM provides safer access to large language models, operating under a “zero data retention” agreement so that data is not retained by OpenAI. We are also encouraging our researchers to build their own interfaces to the models via application programming interfaces (APIs), which allow them to provide more specific instructions and enhance their results.
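
    To illustrate the kind of API use described above, the sketch below sends a chat request with an explicit, researcher-defined system instruction through an OpenAI-compatible client. The endpoint URL, model name, and environment variable are assumptions for illustration only; they are not the actual ELM configuration.

    # Hypothetical sketch of calling an institutionally hosted, OpenAI-compatible
    # endpoint with a researcher-defined system instruction. The base_url, model
    # name, and ELM_API_KEY variable are illustrative assumptions, not the real
    # Edinburgh Language Model configuration.
    import os

    from openai import OpenAI

    client = OpenAI(
        base_url="https://elm.example.ac.uk/v1",  # assumed institutional endpoint
        api_key=os.environ["ELM_API_KEY"],        # assumed credential variable
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # whichever model the institutional agreement covers
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting with a literature review. Cite only the "
                    "sources supplied by the user and flag any uncertainty."
                ),
            },
            {
                "role": "user",
                "content": "Summarise these three abstracts on soil carbon sequestration: ...",
            },
        ],
        temperature=0.2,  # keep outputs conservative for research use
    )

    print(response.choices[0].message.content)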

    Ethics. AI in research throws up a range of challenges around ethics and integrity. Our major project on responsible AI, BRAID, and ethics training by the Institute for Academic Development, provide expertise on how we adapt and apply our ethics processes to address the challenges. We also provide an AI Impact Assessment tool to help researchers work through the potential ethical and safety risks in using AI.

    Research culture. The use of AI is ushering in a major shift in how we conduct research, raising fundamental questions about research integrity. When used well, generative AI can make researchers more productive and effective, freeing time to focus on those aspects of research that require critical thinking and creativity. But it also creates incentives to take short cuts that can compromise the rigour, accuracy and quality of research. For this reason, we need a laser focus on quality over quantity.

    Groundbreaking research is not done quickly, and the most successful researchers do not churn out large volumes of papers – the key is to take time to produce robust, rigorous and innovative research. This is a message that will be strongly built into our renewed 2026 Research Cultures Action Plan.

    AI is helping our researchers drive important advances that will benefit society and the environment. It is imperative that we tap the opportunities of AI, while avoiding some of the often imperceptible risks in its misuse. To this end, we have decided to make AI a core part of our Research and Innovation Strategy – ensuring we have the right training, safety and ethical standards, and research culture to harness the opportunities of this exciting technology in an enabling and responsible way.

    Source link

  • Promoting AI-Enhanced Performance in the Online Classroom – Faculty Focus

    Promoting AI-Enhanced Performance in the Online Classroom – Faculty Focus

    Source link

  • Will GenAI narrow or widen the digital divide in higher education?

    Will GenAI narrow or widen the digital divide in higher education?

    by Lei Fang and Xue Zhou

    This blog is based on our recent publication: Zhou, X, Fang, L, & Rajaram, K (2025) ‘Exploring the digital divide among students of diverse demographic backgrounds: a survey of UK undergraduates’ Journal of Applied Learning and Teaching, 8(1).

    Introduction – the widening digital divide

    Our recent study (Zhou et al, 2025) surveyed 595 undergraduate students across the UK to examine the evolving digital divide across all forms of digital technologies. Although higher education is expected to narrow this divide and build students’ digital confidence, our findings revealed the opposite. We found that the gap in digital confidence and skills between widening participation (WP) and non-WP students widened progressively throughout the undergraduate journey. While students reported peak confidence in Year 2, this was followed by a notable decline in Year 3, when the digital divide became most pronounced. This drop coincides with a critical period when students begin applying their digital skills in real-world contexts, such as job applications and final-year projects.

    Based on our study (Zhou et al, 2025), while universities offer a wide range of support to WP students, such as laptop loans, free access to remote systems, extracurricular digital skills training, and targeted funding, these students often do not make use of the resources. The core issue lies not in the absence of support, but in its uptake. WP students are often excluded from the peer networks and digital communities where emerging technologies are introduced, shared, and discussed. From a Connectivist perspective (Siemens, 2005), this lack of connection to digital, social, and institutional networks limits their awareness, confidence, and ability to engage meaningfully with available digital tools.

    Building on these findings, this blog asks a timely question: as Generative Artificial Intelligence (GenAI) becomes embedded in higher education, will it help bridge this divide or deepen it further?

    GenAI may widen the digital divide — without proper strategies

    While the digital divide in higher education is already well-documented in relation to general technologies, the emergence of GenAI introduces new risks that may further widen this gap (Cachat-Rosset & Klarsfeld, 2023). This matters because students who are GenAI-literate often experience better academic performance (Sun & Zhou, 2024), making the divide not just about access but also about academic outcomes.

    Unlike traditional digital tools, GenAI often demands more advanced infrastructure — including powerful devices, high-speed internet, and in many cases, paid subscriptions to unlock full functionality. WP students, who already face barriers to accessing basic digital infrastructure, are likely to be disproportionately excluded. This divide is not only student-level but also institutional. A few well-funded universities are able to subscribe to GenAI platforms such as ChatGPT, invest in specialised GenAI tools, and secure campus-wide licenses. In contrast, many institutions, particularly those under financial pressure, cannot afford such investments. These disparities risk creating a new cross-sector digital divide, where students’ access to emerging technologies depends not only on their background, but also on the resources of the university they attend.

    In addition, the adoption of GenAI currently occurs primarily through informal channels – via peers, online communities, or individual experimentation – rather than structured teaching (Shailendra et al, 2024). WP students, who may lack access to these digital and social learning networks (Krstić et al, 2021), are therefore less likely to become aware of new GenAI tools, let alone develop the confidence and skills to use them effectively. Even when they do engage with GenAI, students may experience uncertainty, confusion, or fear about using it appropriately, especially in the absence of clear guidance around academic integrity, ethical use, or institutional policy. This ambiguity can lead to increased anxiety and stress, contributing to wider concerns around mental health in GenAI learning environments.

    Another concern is the risk of impersonal learning environments (Berei & Pusztai, 2022). When GenAI tools are implemented without inclusive design, the experience can feel detached and isolating, particularly for WP students, who often already feel marginalised. While GenAI tools may streamline administrative and learning processes, they can also weaken the sense of connection and belonging that is essential for student engagement and success.

    GenAI can narrow the divide — with the right strategies

    Although WP students are often excluded from digital networks, which Connectivism highlights as essential for learning (Goldie, 2016), GenAI, if used thoughtfully, can help reconnect them by offering personalised support, reducing geographic barriers, and expanding access to educational resources.

    To achieve this, we propose five key strategies:

    • Invest in infrastructure and access: Universities must ensure that all students have the tools to participate in the AI-enabled classroom, including access to devices, core software, and free versions of widely used GenAI platforms. While there is a growing variety of GenAI tools on the market, institutions facing financial pressures must prioritise tools that are both widely used and demonstrably effective. The goal is not to adopt everything, but to ensure that all students have equitable access to the essentials.
    • Rethink training with inclusion in mind: GenAI literacy training must go beyond traditional models. It should reflect Equality, Diversity and Inclusion principles, recognising the different starting points students bring and offering flexible, practical formats. Micro-credentials on platforms like LinkedIn Learning or university-branded short courses can provide just-in-time, accessible learning opportunities. These resources are available anytime and from anywhere, enabling students who were previously excluded, such as those in rural or under-resourced areas, to access learning on their own terms.
    • Build digital communities and peer networks: Social connection is a key enabler of learning (Siemens, 2005). Institutions should foster GenAI learning communities where students can exchange ideas, offer peer support, and normalise experimentation. Mental readiness is just as important as technical skill and being part of a supportive network can reduce anxiety and stigma around GenAI use.
    • Design inclusive GenAI policies and ensure ongoing evaluation: Institutions must establish clear, inclusive policies around GenAI use that balance innovation with ethics (Schofield & Zhang, 2024). These policies should be communicated transparently and reviewed regularly, informed by diverse student feedback and ongoing evaluation of impact.
    • Adopt a human-centred approach to GenAI integration: Following UNESCO’s human-centred approach to AI in education (UNESCO, 2024; 2025), GenAI should be used to enhance, not replace, the human elements of teaching and learning. While GenAI can support personalisation and reduce administrative burdens, the presence of academic and pastoral staff remains essential. By freeing staff from routine tasks, GenAI can enable them to focus more fully on the high-impact, relational work – such as mentoring, guidance, and personalised support – that WP students often benefit from most.

    Conclusion

    Generative AI alone will not determine the future of equity in higher education; our actions will. Without intentional, inclusive strategies, GenAI risks amplifying existing digital inequalities, further disadvantaging WP students. However, by proactively addressing access barriers, delivering inclusive and flexible training, building supportive digital communities, embedding ethical policies, and preserving meaningful human interaction, GenAI can become a powerful tool for inclusion. The digital divide doesn’t close itself; institutions must embed equity into every stage of GenAI adoption. The time to act is not once systems are already in place – it is now.

    Dr Lei Fang is a Senior Lecturer in Digital Transformation at Queen Mary University of London. Her research interests include AI literacy, digital technology adoption, the application of AI in higher education, and risk management. lei.fang@qmul.ac.uk

    Professor Xue Zhou is a Professor in AI in Business Education at the University of Leicester. Her research interests fall in the areas of digital literacy, digital technology adoption, cross-cultural adjustment and online professionalism. xue.zhou@le.ac.uk

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • Supporting the Instructional Design Process: Stress-Testing Assignments with AI – Faculty Focus

    Supporting the Instructional Design Process: Stress-Testing Assignments with AI – Faculty Focus

    Source link

  • Reading, Writing, and Thinking in the Age of AI – Faculty Focus

    Reading, Writing, and Thinking in the Age of AI – Faculty Focus

    Source link

  • Using Generative AI to “Hack Time” for Implementing Real-World Projects – Faculty Focus

    Using Generative AI to “Hack Time” for Implementing Real-World Projects – Faculty Focus

    Source link