K-12 GenAI adoption rates have grown–but so have concerns
More than half of K-12 educators (55 percent) have positive perceptions of GenAI, despite concerns and perceived risks around its adoption, according to updated data from Cengage Group’s “AI in Education” research series, which regularly evaluates AI’s impact on education.
Our recent study (Zhou et al., 2025) surveyed 595 undergraduate students across the UK to examine the evolving digital divide in all forms of digital technologies. Although higher education is expected to narrow this divide and build students’ digital confidence, our findings revealed the opposite. We found that the gap in digital confidence and skills between widening participation (WP) and non-WP students widened progressively throughout the undergraduate journey. While students reported peak confidence in Year 2, this was followed by a notable decline in Year 3, when the digital divide became most pronounced. This drop coincides with a critical period when students begin applying their digital skills in real-world contexts, such as job applications and final-year projects.
Our study (Zhou et al., 2025) also found that, while universities offer WP students a wide range of support, such as laptop loans, free access to remote systems, extracurricular digital skills training, and targeted funding, these students often do not make use of it. The core issue lies not in the absence of support, but in its uptake. WP students are often excluded from the peer networks and digital communities where emerging technologies are introduced, shared, and discussed. From a Connectivist perspective (Siemens, 2005), this lack of connection to digital, social, and institutional networks limits their awareness, confidence, and ability to engage meaningfully with available digital tools.
Building on these findings, this blog asks a timely question: as Generative Artificial Intelligence (GenAI) becomes embedded in higher education, will it help bridge this divide or deepen it further?
GenAI may widen the digital divide — without proper strategies
While the digital divide in higher education is already well-documented in relation to general technologies, the emergence of GenAI introduces new risks that may further widen this gap (Cachat-Rosset & Klarsfeld, 2023). This matters because students who are GenAI-literate often experience better academic performance (Sun & Zhou, 2024), making the divide not just about access but also about academic outcomes.
Unlike traditional digital tools, GenAI often demands more advanced infrastructure — including powerful devices, high-speed internet, and in many cases, paid subscriptions to unlock full functionality. WP students, who already face barriers to accessing basic digital infrastructure, are likely to be disproportionately excluded. This divide is not only student-level but also institutional. A few well-funded universities are able to subscribe to GenAI platforms such as ChatGPT, invest in specialised GenAI tools, and secure campus-wide licenses. In contrast, many institutions, particularly those under financial pressure, cannot afford such investments. These disparities risk creating a new cross-sector digital divide, where students’ access to emerging technologies depends not only on their background, but also on the resources of the university they attend.
In addition, the adoption of GenAI currently occurs primarily through informal channels, via peers, online communities, or individual experimentation, rather than structured teaching (Shailendra et al., 2024). WP students, who may lack access to these digital and social learning networks (Krstić et al., 2021), are therefore less likely to become aware of new GenAI tools, let alone develop the confidence and skills to use them effectively. Even when they do engage with GenAI, students may experience uncertainty, confusion, or fear about using it appropriately, especially in the absence of clear guidance around academic integrity, ethical use, or institutional policy. This ambiguity can lead to increased anxiety and stress, contributing to wider concerns around mental health in GenAI learning environments.
Another concern is the risk of impersonal learning environments (Berei & Pusztai, 2022). When GenAI tools are implemented without inclusive design, the experience can feel detached and isolating, particularly for WP students, who often already feel marginalised. While GenAI tools may streamline administrative and learning processes, they can also weaken the sense of connection and belonging that is essential for student engagement and success.
GenAI can narrow the divide — with the right strategies
Although WP students are often excluded from digital networks, which Connectivism highlights as essential for learning (Goldie, 2016), GenAI, if used thoughtfully, can help reconnect them by offering personalised support, reducing geographic barriers, and expanding access to educational resources.
To achieve this, we propose five key strategies:
Invest in infrastructure and access: Universities must ensure that all students have the tools to participate in the AI-enabled classroom, including access to devices, core software, and free versions of widely used GenAI platforms. While there is a growing variety of GenAI tools on the market, institutions facing financial pressures must prioritise tools that are both widely used and demonstrably effective. The goal is not to adopt everything, but to ensure that all students have equitable access to the essentials.
Rethink training with inclusion in mind: GenAI literacy training must go beyond traditional models. It should reflect Equality, Diversity and Inclusion principles, recognising the different starting points students bring and offering flexible, practical formats. Micro-credentials on platforms like LinkedIn Learning or university-branded short courses can provide just-in-time, accessible learning opportunities. These resources are available anytime and from anywhere, enabling students who were previously excluded, such as those in rural or under-resourced areas, to access learning on their own terms.
Build digital communities and peer networks: Social connection is a key enabler of learning (Siemens, 2005). Institutions should foster GenAI learning communities where students can exchange ideas, offer peer support, and normalise experimentation. Mental readiness is just as important as technical skill and being part of a supportive network can reduce anxiety and stigma around GenAI use.
Design inclusive GenAI policies and ensure ongoing evaluation: Institutions must establish clear, inclusive policies around GenAI use that balance innovation with ethics (Schofield & Zhang, 2024). These policies should be communicated transparently and reviewed regularly, informed by diverse student feedback and ongoing evaluation of impact.
Adopt a human-centred approach to GenAI integration: Following UNESCO’s human-centred approach to AI in education (UNESCO, 2024; 2025), GenAI should be used to enhance, not replace, the human elements of teaching and learning. While GenAI can support personalisation and reduce administrative burdens, the presence of academic and pastoral staff remains essential. By freeing staff from routine tasks, GenAI can enable them to focus more fully on high-impact, relational work, such as mentoring, guidance, and personalised support, from which WP students often benefit most.
Conclusion
Generative AI alone will not determine the future of equity in higher education; our actions will. Without intentional, inclusive strategies, GenAI risks amplifying existing digital inequalities, further disadvantaging WP students. However, by proactively addressing access barriers, delivering inclusive and flexible training, building supportive digital communities, embedding ethical policies, and preserving meaningful human interaction, GenAI can become a powerful tool for inclusion. The digital divide doesn’t close itself; institutions must embed equity into every stage of GenAI adoption. The time to act is not once systems are already in place; it is now.
Dr Lei Fang is a Senior Lecturer in Digital Transformation at Queen Mary University of London. Her research interests include AI literacy, digital technology adoption, the application of AI in higher education, and risk management. lei.fang@qmul.ac.uk
Professor Xue Zhou is a Professor in AI in Business Education at the University of Leicester. Her research interests fall in the areas of digital literacy, digital technology adoption, cross-cultural adjustment and online professionalism. xue.zhou@le.ac.uk
A defining quality of student-centred teaching is effective assessment. In recent years, the discourse around effective assessment has steered towards incorporating the “assessment for learning” strategy into curriculum development. In its basic form, assessment for learning entails assessment where pre-emptive, future-oriented comments (called feedforward) are given to learners to guide improvement in future performances. This contrasts with the typical assessment arrangement, where feedback is treated as a product offered to the student in exchange for what they submit. However, two broad factors have historically impeded the adoption of assessment for learning with feedforward corrective actions in educational settings:
Increased workload and scalability: This approach demands creating varied, low-stakes formative assessment tasks with actionable feedforward recommendations that guide future learning. Moreover, providing personalized corrective actions for each student on each low-stakes test is both time- and resource-intensive. In large courses with many students in particular, the additional burden on educators’ already heavy workloads should not be underestimated.
Student engagement and motivation: Students are reluctant to engage with formative assessment tasks if they perceive them as low-stakes exercises with no incentive that benefits their grades.
Overcoming these impediments is crucial to accelerating progress with assessment for learning. In this vein, the central idea of the assessment flywheel is to harness GenAI to overcome both challenges. In effect, this approach adds to the growing recognition of GenAI’s potential to transform assessment.
The Assessment Flywheel Model
Why an assessment flywheel? In the field of engineering, which is my academic background, traditional methods such as lectures, tutorials, and laboratory sessions are the dominant content delivery vehicles. However, these activities are often lecturer-led and time-bound, which limits students’ active knowledge consumption and self-regulated learning. Compounding the problem, when it comes time to assess our students’ learning, lecturers tend to focus predominantly on summative assessments, which are often followed by feedback of one kind or another. Notably, this prevalent approach of providing meaningful feedback solely on summative assessments creates a tension between the traditional assessment of learning (which is backwards-looking) and assessment for learning (which is forward-looking).
The discourse around how to incorporate the assessment for learning strategy into curriculum development has permeated the literature on teaching and learning for the past several years. However, adoption has been slow. Among the reasons for this is that assessment for learning demands constructing and assessing a diversity of low-stakes formative assessment tasks suitable for providing “feedforward” comments that trigger corrective actions in students. In other words, the new paradigm imposes a significant burden on resource-constrained educators, especially with a large cohort of students. And this is where the idea of the “assessment flywheel” comes in.
Essentially, the assessment flywheel is a GenAI-mediated assessment framework. It is a dynamic, self-reinforcing system that continuously gathers, analyzes, and applies student performance data to enhance learning outcomes. As depicted in the figure below, the flywheel model operates in a continuous cycle, and each iteration of tasks, learning, and proactive feedforward comments builds momentum for deeper skill development. This contrasts with traditional assessments, which are static and episodic. More specifically, within the flywheel model, GenAI plays a central role in generating personalized tasks, providing personalized feedback, and adapting learning materials. The goal is to create an automated loop where student learning accelerates over time through persistent, data-driven refinement. The model allows instructors to incorporate personalized exercises that revolve around a subject’s learning outcomes in low-stakes tests. With proper tailoring of the assessment flywheel model for low-stakes tests, students can instantly identify their current state of progress and receive immediate comments for corrective action from the GenAI-powered system.
An illustration of the assessment flywheel towards achieving the idea of “assessment for learning” using LLM/GenAI (adapted from the author’s recent publication).
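To make the cycle concrete, below is a minimal Python sketch of one turn of such a flywheel, assuming a hypothetical call_llm helper that stands in for whichever GenAI service an institution has licensed; the LearnerRecord structure and the prompts are illustrative assumptions rather than the author’s implementation.

```python
# A minimal sketch of one cycle of the assessment flywheel (illustrative only).
# `call_llm` is a hypothetical placeholder for whichever GenAI service an
# institution has licensed; the prompts and data structures are assumptions,
# not the author's implementation.
from dataclasses import dataclass, field


@dataclass
class LearnerRecord:
    name: str
    history: list = field(default_factory=list)  # prior tasks, answers, feedforward


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an institutionally approved GenAI model."""
    raise NotImplementedError("Connect this to your licensed GenAI service.")


def generate_task(learner: LearnerRecord, learning_outcome: str) -> str:
    """Step 1: generate a personalized, low-stakes task aligned to a learning outcome."""
    previous = [turn["task"] for turn in learner.history]
    return call_llm(
        f"Write one short formative question assessing '{learning_outcome}'. "
        f"Do not repeat any of these earlier questions: {previous}"
    )


def give_feedforward(learner: LearnerRecord, task: str, answer: str) -> str:
    """Step 2: produce forward-looking comments that guide the next attempt, not a grade."""
    feedforward = call_llm(
        f"Question: {task}\nStudent answer: {answer}\n"
        "Offer two specific, forward-looking suggestions for the student's next "
        "attempt at this learning outcome. Do not assign a mark."
    )
    # Step 3: record the cycle so the next generated task adapts to demonstrated progress.
    learner.history.append({"task": task, "answer": answer, "feedforward": feedforward})
    return feedforward
```

Each pass through generate_task and give_feedforward appends to the learner’s history, and that history is what gives the wheel its momentum: the next task is conditioned on everything the student has already attempted.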
Rolling Out GenAI-Driven Assessment Flywheel to Improve Learning
To be effective, a few steps need to be taken before a GenAI-driven assessment flywheel can be introduced into academic learning spaces. First, the system must be fed with the learning content of specific subjects, from which question-answer pairs can be generated automatically; with proper tailoring for low-stakes tests, students can then instantly identify their current state of progress and receive immediate “feedforward” comments for corrective action. Second, a mechanism to minimize hallucinations (a common problem with GenAI) must be in place. Third, a structured approach that guides users of the system needs to be instituted as part of the rollout; for this latter point, a framework similar to Professor Gilly Salmon’s five-stage model for online learning is recommended.
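As a purely illustrative example of the second step, the sketch below filters automatically generated question-answer pairs against the supplied course content and discards any pair whose answer is not supported by it. The word-overlap heuristic, the 0.6 threshold, and the function names are assumptions made for demonstration; a real deployment would rely on stronger grounding checks such as retrieval against the source material, citation matching, or human review.

```python
# A deliberately simple illustration of a hallucination guardrail: keep only
# question-answer pairs whose answers are supported by the supplied course
# content. The word-overlap heuristic and 0.6 threshold are illustrative
# assumptions; a production system would use stronger checks (retrieval,
# citation matching, or human review).
import re


def content_words(text: str) -> set[str]:
    """Lower-case words of four or more letters, as a rough content signal."""
    return set(re.findall(r"[a-z]{4,}", text.lower()))


def supported_by_source(answer: str, source_text: str, threshold: float = 0.6) -> bool:
    """Accept an answer only if most of its content words appear in the source."""
    answer_words = content_words(answer)
    if not answer_words:
        return False
    overlap = len(answer_words & content_words(source_text)) / len(answer_words)
    return overlap >= threshold


def filter_qa_pairs(qa_pairs: list[dict], source_text: str) -> list[dict]:
    """Keep only question-answer pairs that are grounded in the course material."""
    return [qa for qa in qa_pairs if supported_by_source(qa["answer"], source_text)]


# Example: the second, unsupported answer is filtered out before reaching students.
notes = "The second moment of area measures a beam's resistance to bending."
pairs = [
    {"question": "What does the second moment of area measure?",
     "answer": "It measures a beam's resistance to bending."},
    {"question": "Who invented the beam?",
     "answer": "Beams were patented by a Victorian railway company in 1830."},
]
print(filter_qa_pairs(pairs, notes))
```

Pairs that fail the check are simply dropped before they ever reach students, which keeps the flywheel’s low-stakes tests anchored to the taught content.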
Conclusion
Overall, GenAI holds concrete pedagogical promise as a way of augmenting traditional teaching and assessment approaches to enhance students’ learning. The GenAI-powered assessment flywheel discussed here embodies the spirit of the well-known military strategy of “Observe, Orient, Decide and Act” (OODA). In the context of a personalized learning strategy, this approach holds the potential to deepen students’ accomplishment in the competencies associated with learning objectives. It will also allow educators to provide assessments that facilitate authentic learning with minimal addition to their workload.
Dr. Khameel Mustapha holds a PhD in mechanical engineering from Nanyang Technological University (Singapore). He also holds two teaching qualifications – Postgraduate Certificate in Higher Education (PGCHE, Nottingham UK) and Graduate Certificate in Teaching and Learning (GCLT, Swinburne Australia). He is currently an Associate Professor with the Department of Mechanical Engineering, University of Nottingham (Malaysia campus). He is a Fellow of the Higher Education Academy (UK).
This HEPI blog was authored by Isabelle Bristow, Managing Director UK and Europe at Studiosity.
In a HEPI blog published almost a year ago, Student Voices on AI: Navigating Expectations and Opportunities, I reported the findings of global research Studiosity commissioned with YouGov on students’ attitudes towards artificial intelligence (AI). The intervening year would be considered a relatively short period in a more regular higher education setting. However, given the rapid pace of change within the Gen-AI sphere, one year is practically an aeon.
We have recently commissioned a further YouGov survey to explore the motivations, emotions, and needs of over 2,200 students from 151 universities in the UK.
Below, I will cover the top five takeaways from this new round of research, but first, which students are using AI?
64% of all students have used AI tools to help with assignments or study tasks.
International student use (87%) is a staggering 27 percentage points higher than that of their domestic counterparts (60%).
There is a 21 percentage point gap between students identifying as female who said they have never used AI tools for study tasks (42%) and those identifying as male (21%).
Only 17% of students studying business said they have never used it, compared with 46% studying Humanities and Social Sciences.
The highest reported use is by students studying in London at 78%, and conversely, the highest non-use was reported by students studying in Scotland at 44%.
The Top Five Takeaways:
The proportion of students who think their university is adapting fast enough to provide AI study support tools has risen by 11 percentage points since last year.
Following a year of global Gen-AI development and another year for institutions to adapt, students who believe their university is adjusting quickly enough remain in the minority this year at 47%, up from 36% in 2024. The remaining 53% of student respondents believe their institution has more to do.
When asked if they expect their university to offer AI support tools to students, the result is the same as last year, with 39% of students answering yes to this question. This was significantly higher for male students at 51% (up by 3 percentage points from last year) and for international students at 61% (up by 4 percentage points from last year). Once again, this year, business students have the highest expectations at 58% (just 1 percentage point higher than last year). Following this, medicine (53%), nursing (48%) and STEM (46%) students were more likely to respond ‘Yes’ when asked if they expect their university to provide AI tools.
Some students have concerns over academic integrity.
When asked if they felt their university should provide AI tools, students who answered ‘no’ were given a free text box to explain their reasoning. Most of these responses related to academic integrity.
‘I don’t think unis support its use because it helps students plagiarise and cheat.’
‘I think AI beats the whole idea of a degree, but it can be used for grammar correction and general fluidity.’
‘Because it would be unfair and result in the student not really learning or thinking for themselves.’
Only 7% of students said they would use an AI tool for help with plagiarism or referencing (‘Ask my lecturer’ was at 30% and ‘Use a 24/7 university online writing feedback tool’ was at 21%).
Students who use AI regularly are less likely to rank ‘fear of failing’ as one of their top three study stresses
We asked all students, regardless of their AI use, for their top three reasons for feeling stressed about studying. The responses were as follows:
61% of all UK students included ‘fear of failing’ in their top 3 reasons for feeling stressed about studying;
52% of all students included ‘balancing other commitments’; and
41% of all students included ‘preparing for exams and assessments’.
These statistics change when we filter by students who use AI tools to help with assignments or study tasks. Fear of failing is still the highest-ranked study stress. The percentage of respondents who rank fear of failing in their top three study stresses by AI use are as follows:
69% for those who never use AI;
62% for those who have used AI once or twice;
58% for those who have used AI a few times; and
50% for those who use AI regularly.
Looking at the main reasons students want to use the university’s AI service for support or feedback, this year, ‘confidence’ (25%) overtook ‘speed’ (16%). Female respondents, in particular, are using AI for reasons relating to confidence at 29%, compared to 20% for male students. International students valued ‘skills’ the most at 20%, significantly higher than their domestic student counterparts at 11%.
Students who feel like they belong are more likely to use AI.
We examined the correlation between students’ sense of belonging in their university community and how much they use AI tools to help with assignments or study tasks.
For students who feel like they belong, 67% said they have used AI tools to help with assignments or study tasks; this compares with 47% for students who do not feel like they belong.
Cognitive offloading (using technology to circumvent the ‘learning element’ of a task) is a top concern of academics and institutional leadership in 2025. However, student responses suggest they feel they are both learning and improving their skills when using generative tools.
When asked if they were confident they are learning as well as improving their own skills when using generative tools, students responded as follows:
12% were extremely confident that they were learning and developing skills;
31% were very confident;
29% were moderately confident;
26% were slightly confident; and
Only 5% were not at all confident that this was true.
Conclusion:
Reflecting on the three years since Gen-AI’s disruptive entrance into the mainstream, the sector has now come to terms with the power, potential, and risks of Gen-AI. There is also a significantly better understanding of the importance of ensuring these tools enhance student learning rather than undermining it by offloading cognitive effort.
Leaders can look to a holistic approach to university-approved, trusted Gen-AI support, to improve student outcomes, experience and wellbeing.
—
You can download the full Annual Global Student Wellbeing Survey – UK report here.
Studiosity is a HEPI Partner. Studiosity is AI-for-Learning, not corrections – to scale student success, empower educators, and improve retention with a proven 4.4x ROI, while ensuring integrity and reducing institutional risk. Studiosity delivers ethical and formative feedback at scale to over 250 institutions worldwide. With unique AI-for-Learning technology, all students can benefit from formative feedback in minutes. From their first draft to just before submission, students receive personalised feedback – including guidance on how they can demonstrably improve their own work and critical thinking skills. Actionable insight is accessible to faculty and leaders, revealing the scale of engagement with support, cohorts requiring intervention, and measurable learning progress.
This week on the podcast, UK Research and Innovation and the Office for Students both have new leadership – but what does that mean for the future of regulation, research funding, and sector confidence?
Meanwhile, a new report reveals a dramatic rise in student use of generative AI, and as speculation swirls over potential changes to post-study work visas, the sector braces for further uncertainty in international student recruitment.
With Mark Bennett, Director (Audience & Insight) at FindAUniversity, Sarah Cowan, Head of Policy (Higher Education and Research) at the British Academy, Michael Salmon, News Editor at Wonkhe, and presented by Mark Leach, Editor-in-Chief at Wonkhe.