Tag: genAI

  • Framework for GenAI in Graduate Career Development (opinion)

    Framework for GenAI in Graduate Career Development (opinion)

    In Plato’s Phaedrus, King Thamus feared writing would make people forgetful and create the appearance of wisdom without true understanding. His concern was not merely about a new tool, but about a technology that would fundamentally transform how humans think, remember and communicate. Today, we face similar anxieties about generative AI. Like writing before it, generative AI is not just a tool but a transformative technology reshaping how we think, write and work.

    This transformation is particularly consequential in graduate education, where students develop professional competencies while managing competing demands: research deadlines, teaching responsibilities, caregiving obligations and, often, financial pressures. Generative AI’s appeal is clear; it promises to accelerate tasks that compete for limited time and cognitive resources. Graduate students report using ChatGPT and similar tools for professional development tasks, such as drafting cover letters, preparing for interviews and exploring career options, often without institutional guidance on effective and ethical use.

    Most AI policies focus on coursework and academic integrity; professional development contexts remain largely unaddressed. Faculty and career advisers need practical strategies for guiding students to use generative AI critically and effectively. This article proposes a four-stage framework—explore, build, connect, refine—for guiding students’ generative AI use in professional development.

    Professional Development in the AI Era

    Over the past decade, graduate education has invested significantly in career readiness through dedicated offices, individual development plans and co-curricular programming—for example, the Council of Graduate Schools’ PhD Career Pathways initiative involved 75 U.S. doctoral institutions building data-informed professional development, and the Graduate Career Consortium, representing graduate-focused career staff, grew from roughly 220 members in 2014 to 500-plus members across about 220 institutions by 2022.

    These investments reflect recognition that Ph.D. and master’s students pursue diverse career paths, with fewer than half of STEM Ph.D.s entering tenure-track positions immediately after graduation; the figure for humanities and social sciences also remains below 50 percent overall.

    We now face a different challenge: integrating a technology that touches every part of the knowledge economy. Generative AI adoption among graduate students has been swift and largely unsupervised: At Ohio State University, 48 percent of graduate students reported using ChatGPT in spring 2024. At the University of Maryland, 77 percent of students report using generative AI, and 35 percent use it routinely for academic work, with graduate students more likely than undergraduates to be routine users; among routine student users, 38 percent said they did so without instructor guidance.

    Some subskills, like mechanical formatting, will matter less in this landscape; higher-order capacities—framing problems, tailoring messages to audiences, exercising ethical discernment—will matter more. For example, in a 2025 National Association of Colleges and Employers survey, employers rank communication and critical thinking among the most important competencies for new hires, and in a 2024 LinkedIn report, communication was the most in-demand skill.

    Without structured guidance, students face conflicting messages: Some faculty ban AI use entirely, while others assume so-called digital natives will figure it out independently. This leaves students navigating an ethical and practical minefield with high stakes for their careers. A framework offers consistency and clear principles across advising contexts.

    We propose a four-stage framework that mirrors how professionals actually learn: explore, build, connect, refine. This approach adapts design thinking principles, the iterative cycle of prototyping and testing, to AI-augmented professional development. Students rapidly generate options with AI support, test them in low-stakes environments and refine based on feedback. While we use writing and communication examples throughout for clarity, this framework applies broadly to professional development.

    Explore: Map Possibilities and Surface Gaps

    Exploring begins by mapping career paths, fellowship opportunities and professional norms, then identifying gaps in skills or expectations. A graduate student can ask a generative AI chatbot to infer competencies from their lab work or course projects, then compare those competencies against current job postings in their target sector to identify the gaps they need to close. They can generate a matrix of fellowship opportunities in their field, including eligibility requirements, deadlines and required materials, and then validate every detail on official websites. They can ask AI to describe communication norms in target sectors, comparing the tone and structure of academic versus industry cover letters—not to memorize a script, but to understand audience expectations they will need to meet.

    Students should not, however, rely on AI-generated job descriptions or program requirements without verification, as the technology may conflate roles, misrepresent qualifications or cite outdated information and sources.

    Build: Learn Through Iterative Practice

    Building turns insight into artifacts and habits. With generative AI as a sounding board, students can experiment with different résumé architectures for the same goal, testing chronological versus skills-based formats or tailoring a CV for academic versus industry positions. They can generate detailed outlines for an individual development plan, breaking down abstract goals into concrete, time-bound actions. They can devise practice tasks that address specific growth areas, such as mock interview questions for teaching-intensive positions or practice pitches tailored to different funding audiences. The point is not to paste in AI text; it is to lower the barriers of uncertainty and blank-page intimidation, making it easier to start building while keeping authorship and evidence squarely in the student’s hands.

    Connect: Communicate and Network With Purpose

    Connecting focuses on communicating with real people. Here, generative AI can lower the stakes for high-pressure interactions. By asking a chatbot to act the part of various audience members, students can rehearse multiple versions of a tailored 60-second elevator pitch, such as for a recruiter at a career fair, a cross-disciplinary faculty member at a poster session or a community partner exploring collaboration. Generative AI can also simulate informational interviews if students prompt the system to ask follow-up questions or even refine user inputs.

    In addition, students can leverage generative AI to draft initial outreach notes to potential mentors that the students then personalize and fact-check. They can explore networking strategies for conferences or professional association events, identifying whom to approach and what questions to ask based on publicly available information about attendees’ work.

    Even just five years ago, completing this nonexhaustive list of networking tasks might have seemed impossible for graduate students with already crammed agendas. Generative AI, however, affords graduate students the opportunity to become adept networkers without sacrificing much time from research and scholarship. Crucially, generative AI creates a low-risk space to practice, while it is the student who ultimately supplies credibility and authentic voice. Generative AI cannot build genuine relationships, but it can help students prepare for the human interactions where relationships form.

    Refine: Test, Adapt and Verify

    Refining is where judgment becomes visible. Before submitting a fellowship essay, for example, a student can ask the generative AI chatbot to simulate likely reviewer critiques based on published evaluation criteria, then use that feedback to align revisions to scoring rubrics. They can A/B test two AI-generated narrative approaches from the build stage with trusted readers, advisers or peers to determine which is more compelling. Before a campus talk, they can ask the chatbot to identify jargon, unclear transitions or slides with excessive text, then revise for audience accessibility.

    In each case, verification and ownership are nonnegotiable: Students must check references, deadlines and factual claims against primary sources and ensure the final product reflects their authentic voice rather than generic AI prose. A student who submits an AI-refined essay without verification may cite outdated program requirements, misrepresent their own experience or include plausible-sounding but fabricated details, undermining credibility with reviewers and jeopardizing their application.

    Cultivate Expert Caution, Not Technical Proficiency

    The goal is not to train students as prompt engineers but to help them exercise expert caution. This means teaching students to ask: Does this AI-generated text reflect my actual experience? Can I defend every claim in an interview? Does this output sound like me, or like generic professional-speak? Does this align with my values and the impression I want to create? If someone asked, “Tell me more about that,” could I elaborate with specific details?

    Students should view AI as a thought partner for the early stages of professional development work: the brainstorming, the first-draft scaffolding, the low-stakes rehearsal. It cannot replace human judgment, authentic relationships or deep expertise. A generative AI tool can help a student draft three versions of an elevator pitch, but only a trusted adviser can tell them which version sounds most genuine. It can list networking strategies, but only actual humans can become meaningful professional connections.

    Conclusion

    Each graduate student brings unique aptitudes, challenges and starting points. First-generation students navigating unfamiliar professional cultures may use generative AI to explore networking norms and decode unstated expectations. International students can practice U.S. interview conventions and professional correspondence styles. Part-time students with limited campus access can get preliminary feedback before precious advising appointments. Students managing disabilities or mental health challenges can use generative AI to reduce the cognitive load of initial drafting, preserving energy for higher-order revision and relationship-building.

    Used critically and transparently, generative AI can help students at all starting points explore, build, connect and refine their professional paths, alongside faculty advisers and career development professionals—never replacing them, but providing just-in-time feedback and broader access to coaching-style support.

    The question is no longer whether generative AI belongs in professional development. The real question is whether we will guide students to use it thoughtfully or leave them to navigate it alone. The explore-build-connect-refine framework offers one path forward: a structured approach that develops both professional competency and critical judgment. We choose guidance.

    Ioannis Vasileios Chremos is program manager for professional development at the University of Michigan Medical School Office of Graduate and Postdoctoral Studies.

    William A. Repetto is a postdoctoral researcher in the Department of English and the research office at the University of Delaware.

    Source link

  • Online Course Gives College Students a Foundation on GenAI

    Online Course Gives College Students a Foundation on GenAI

    As more employers identify uses for generative artificial intelligence in the workplace, colleges are embedding tech skills into the curriculum to best prepare students for their careers.

    But identifying how and when to deliver that content has been a challenge, particularly given the varying perspectives different disciplines have on generative AI and when its use should be allowed. A June report from Tyton Partners found that 42 percent of students use generative AI tools at least weekly, and two-thirds of students rely on a single generative AI tool, such as ChatGPT. A survey by Inside Higher Ed and Generation Lab found that 85 percent of students had used generative AI for coursework in the past year, most often for brainstorming or asking questions.

    The University of Mary Washington developed an asynchronous one-credit course to give all students enrolled this fall a baseline foundation of AI knowledge. The optional class, which was offered over the summer at no cost to students, introduced them to AI ethics, tools, copyright concerns and potential career impacts.

    The goal is to help students use the tools thoughtfully and intelligently, said Anand Rao, director of Mary Washington’s center for AI and the liberal arts. Initial results show most students learned something from the course, and they want more teaching on how AI applies to their majors and future careers.

    How it works: The course, IDIS 300: Introduction to AI, was offered to any new or returning UMW student to be completed any time between June and August. Students who opted in were added to a digital classroom with eight modules, each containing a short video, assigned readings, a discussion board and a quiz assignment. The class was for credit, graded as pass-fail, but didn’t fulfill any general education requirements.

    Course content ranged from using AI tools and prompting generative AI effectively to academic integrity, professional development and critically evaluating AI responses.

    “I thought those were all really important as a starting point, and that still just scratches the surface,” Rao said.

    The course is not designed to make everyone an AI user, Rao said, “but I do want them to be able to speak thoughtfully and intelligently about the use of tools, the application of tools and when and how they make decisions in which they’ll be able to use those tools.”

    At the end of the course, students submitted a short paper analyzing an AI tool used in their field or discipline—its output, use cases and ways the tool could be improved.

    Rao developed most of the content, but he collaborated with campus stakeholders who could provide additional insight, such as the Honor Council, to lay out how AI use is articulated in the honor code.

    The impact: In total, the first class enrolled 249 students from a variety of majors and disciplines, or about 6 percent of the university’s total undergraduate population. A significant number of the course enrollees were incoming freshmen. Eighty-eight percent of students passed the course, and most had positive feedback on the class content and structure.

    In postcourse surveys, 68 percent of participants indicated IDIS 300 should be a mandatory course or highly recommended for all students.

    “If you know nothing about AI, then this course is a great place to start,” said one junior, noting that the content builds from the basics to direct career applications.

    What’s next: Rao is exploring ways to scale the course in the future, including by developing intermediate or advanced classes or creating discipline-specific offerings. He’s also hoping to recruit additional instructors, because the course’s large size made some elements challenging, such as sustaining meaningful exchanges on the discussion board.

    The center will continue to host educational and discussion-based events throughout the year to continue critical conversations regarding generative AI. The first debate, centered on AI and the environment, aims to evaluate whether AI’s impact will be a net positive or negative over the next decade, Rao said.

    The university is also considering ways to engage the wider campus community and those outside the institution with basic AI knowledge. IDIS 300 content will be made available to nonstudents this year as a Canvas page. Some teachers in the local school district said they’d like to teach the class as a dual-enrollment course in the future.

    Source link

  • How To Teach With AI Transparency Statements – Faculty Focus

    How To Teach With AI Transparency Statements – Faculty Focus

    Source link

  • From Policing to Pedagogy: Navigating AI’s Transformative Power – Faculty Focus

    From Policing to Pedagogy: Navigating AI’s Transformative Power – Faculty Focus

    Source link

  • A Values-Based Approach to Using Gen AI – Faculty Focus

    A Values-Based Approach to Using Gen AI – Faculty Focus

    Source link

  • Data shows growing GenAI adoption in K-12

    Data shows growing GenAI adoption in K-12

    Key points:

    • K-12 GenAI adoption rates have grown–but so have concerns 
    • A new era for teachers as AI disrupts instruction
    • With AI coaching, a math platform helps students tackle tough concepts
    • For more news on GenAI, visit eSN’s AI in Education hub

    More than half of K-12 educators (55 percent) have positive perceptions about GenAI, despite concerns and perceived risks in its adoption, according to updated data from Cengage Group’s “AI in Education” research series, which regularly evaluates AI’s impact on education.

    Source link

  • Will GenAI narrow or widen the digital divide in higher education?

    Will GenAI narrow or widen the digital divide in higher education?

    by Lei Fang and Xue Zhou

    This blog is based on our recent publication: Zhou, X, Fang, L, & Rajaram, K (2025) ‘Exploring the digital divide among students of diverse demographic backgrounds: a survey of UK undergraduates’ Journal of Applied Learning and Teaching, 8(1).

    Introduction – the widening digital divide

    Our recent study (Zhou et al, 2025) surveyed 595 undergraduate students across the UK to examine the evolving digital divide across all forms of digital technologies. Although higher education is expected to narrow this divide and build students’ digital confidence, our findings revealed the opposite. We found that the gap in digital confidence and skills between widening participation (WP) and non-WP students widened progressively throughout the undergraduate journey. While students reported peak confidence in Year 2, this was followed by a notable decline in Year 3, when the digital divide became most pronounced. This drop coincides with a critical period when students begin applying their digital skills in real-world contexts, such as job applications and final-year projects.

    Based on our study (Zhou et al, 2025), while universities offer WP students a wide range of support, such as laptop loans, free access to remote systems, extracurricular digital skills training, and targeted funding, these students often do not make use of the resources. The core issue lies not in the absence of support, but in its uptake. WP students are often excluded from the peer networks and digital communities where emerging technologies are introduced, shared, and discussed. From a Connectivist perspective (Siemens, 2005), this lack of connection to digital, social, and institutional networks limits their awareness, confidence, and ability to engage meaningfully with available digital tools.

    Building on these findings, this blog asks a timely question: as Generative Artificial Intelligence (GenAI) becomes embedded in higher education, will it help bridge this divide or deepen it further?

    GenAI may widen the digital divide — without proper strategies

    While the digital divide in higher education is already well-documented in relation to general technologies, the emergence of GenAI introduces new risks that may further widen this gap (Cachat-Rosset & Klarsfeld, 2023). This matters because students who are GenAI-literate often experience better academic performance (Sun & Zhou, 2024), making the divide not just about access but also about academic outcomes.

    Unlike traditional digital tools, GenAI often demands more advanced infrastructure — including powerful devices, high-speed internet, and in many cases, paid subscriptions to unlock full functionality. WP students, who already face barriers to accessing basic digital infrastructure, are likely to be disproportionately excluded. This divide is not only student-level but also institutional. A few well-funded universities are able to subscribe to GenAI platforms such as ChatGPT, invest in specialised GenAI tools, and secure campus-wide licenses. In contrast, many institutions, particularly those under financial pressure, cannot afford such investments. These disparities risk creating a new cross-sector digital divide, where students’ access to emerging technologies depends not only on their background, but also on the resources of the university they attend.

    In addition, the adoption of GenAI currently occurs primarily through informal channels – via peers, online communities, or individual experimentation – rather than structured teaching (Shailendra et al, 2024). WP students, who may lack access to these digital and social learning networks (Krstić et al, 2021), are therefore less likely to become aware of new GenAI tools, let alone develop the confidence and skills to use them effectively. Even when they do engage with GenAI, students may experience uncertainty, confusion, or fear about using it appropriately, especially in the absence of clear guidance around academic integrity, ethical use, or institutional policy. This ambiguity can lead to increased anxiety and stress, contributing to wider concerns around mental health in GenAI learning environments.

    Another concern is the risk of impersonal learning environments (Berei & Pusztai, 2022). When GenAI tools are implemented without inclusive design, the experience can feel detached and isolating, particularly for WP students, who often already feel marginalised. While GenAI tools may streamline administrative and learning processes, they can also weaken the sense of connection and belonging that is essential for student engagement and success.

    GenAI can narrow the divide — with the right strategies

    Although WP students are often excluded from digital networks, which Connectivism highlights as essential for learning (Goldie, 2016), GenAI, if used thoughtfully, can help reconnect them by offering personalised support, reducing geographic barriers, and expanding access to educational resources.

    To achieve this, we propose five key strategies:

    • Invest in infrastructure and access: Universities must ensure that all students have the tools to participate in the AI-enabled classroom including access to devices, core software, and free versions of widely used GenAI platforms. While there is a growing variety of GenAI tools on the market, institutions facing financial pressures must prioritise tools that are both widely used and demonstrably effective. The goal is not to adopt everything, but to ensure that all students have equitable access to the essentials.
    • Rethink training with inclusion in mind: GenAI literacy training must go beyond traditional models. It should reflect Equality, Diversity and Inclusion principles recognising the different starting points students bring and offering flexible, practical formats. Micro-credentials on platforms like LinkedIn Learning or university-branded short courses can provide just-in-time, accessible learning opportunities. These resources are available anytime and from anywhere, enabling students who were previously excluded such as those in rural or under-resourced areas to access learning on their own terms.
    • Build digital communities and peer networks: Social connection is a key enabler of learning (Siemens, 2005). Institutions should foster GenAI learning communities where students can exchange ideas, offer peer support, and normalise experimentation. Mental readiness is just as important as technical skill and being part of a supportive network can reduce anxiety and stigma around GenAI use.
    • Design inclusive GenAI policies and ensure ongoing evaluation: Institutions must establish clear, inclusive policies around GenAI use that balance innovation with ethics (Schofield & Zhang, 2024). These policies should be communicated transparently and reviewed regularly, informed by diverse student feedback and ongoing evaluation of impact.
    • Adopt a human-centred approach to GenAI integration: Following UNESCO’s human-centred approach to AI in education (UNESCO, 2024; 2025), GenAI should be used to enhance, not replace the human elements of teaching and learning. While GenAI can support personalisation and reduce administrative burdens, the presence of academic and pastoral staff remains essential. By freeing staff from routine tasks, GenAI can enable them to focus more fully on this high-impact, relational work, such as mentoring, guidance, and personalised support that WP students often benefit from most.

    Conclusion

    Generative AI alone will not determine the future of equity in higher education; our actions will. Without intentional, inclusive strategies, GenAI risks amplifying existing digital inequalities, further disadvantaging WP students. However, by proactively addressing access barriers, delivering inclusive and flexible training, building supportive digital communities, embedding ethical policies, and preserving meaningful human interaction, GenAI can become a powerful tool for inclusion. The digital divide doesn’t close itself; institutions must embed equity into every stage of GenAI adoption. The time to act is not once systems are already in place; it is now.

    Dr Lei Fang is a Senior Lecturer in Digital Transformation at Queen Mary University of London. Her research interests include AI literacy, digital technology adoption, the application of AI in higher education, and risk management. [email protected]

    Professor Xue Zhou is a Professor in AI in Business Education at the University of Leicester. Her research interests fall in the areas of digital literacy, digital technology adoption, cross-cultural adjustment and online professionalism. [email protected]

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • From Feedback to Feedforward: Using AI-Powered Assessment Flywheel to Drive Student Competency – Faculty Focus

    From Feedback to Feedforward: Using AI-Powered Assessment Flywheel to Drive Student Competency – Faculty Focus

    Source link

  • How can evolving student attitudes inform institutional Gen-AI initiatives?

    How can evolving student attitudes inform institutional Gen-AI initiatives?

    This HEPI blog was authored by Isabelle Bristow, Managing Director UK and Europe at Studiosity.

    In a HEPI blog published almost a year ago, Student Voices on AI: Navigating Expectations and Opportunities, I reported the findings of global research Studiosity commissioned with YouGov on students’ attitudes towards artificial intelligence (AI). The intervening year would be a relatively short period in a more settled higher education context. However, given the rapid pace of change within the Gen-AI sphere, this single year is practically aeons.

    We have recently commissioned a further YouGov survey to explore the motivations, emotions, and needs of over 2,200 students from 151 universities in the UK.

    Below, I will cover the top five takeaways from this new round of research, but first, which students are using AI?

    • 64% of all students have used AI tools to help with assignments or study tasks.
    • International student use (87%) is a staggering 27 percentage points higher than that of their domestic counterparts (60%) – see the worked note after this list.
    • There is a 21-percentage-point gap between students identifying as female who said they have never used AI tools for study tasks (42%) and those identifying as male (21%).
    • Only 17% of students studying business said they have never used it, compared with 46% studying Humanities and Social Sciences.
    • The highest reported use is by students studying in London at 78%, and conversely, the highest non-use was reported by students studying in Scotland at 44%.
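
    Since these comparisons are absolute differences between percentages, a brief worked note on the arithmetic (our illustration, not part of the survey report): in relative terms, the international-versus-domestic gap is even starker.

    % Illustrative arithmetic: absolute gap vs. relative difference,
    % using the international (87%) and domestic (60%) usage figures above.
    \[
      \underbrace{87\% - 60\% = 27\ \text{percentage points}}_{\text{absolute gap}}
      \qquad
      \underbrace{\frac{87 - 60}{60} = 45\%}_{\text{relative difference}}
    \]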

    The Top Five Takeaways:

    1. There is an 11-percentage-point increase from last year in students thinking that their university is adapting fast enough to provide AI study support tools.

    After another year of global Gen-AI development and institutional adaptation, students who believe their university is adjusting quickly enough remain in the minority this year at 47%, up from 36% in 2024. The remaining 53% of student respondents believe their institution has more to do.

    When asked if they expect their university to offer AI support tools to students, the result is the same as last year – with 39% of students answering yes to this question. This was significantly higher for male students at 51% (up 3 percentage points from last year) and for international students at 61% (up 4 percentage points from last year). Once again, this year, business students have the highest expectations at 58% (just 1 percentage point higher than last year). Following this, medicine (53%), nursing (48%) and STEM (46%) were more likely to respond ‘Yes’ when asked if they expect their university to provide AI tools.

    2. Some students have concerns over academic integrity.

    When asked if they felt their university should provide AI tools, students who answered ‘no’ were given a free text box to explain their reasoning. Most of these responses related to academic integrity.

    ‘I don’t think unis support its use because it helps students plagiarise and cheat.’

    ‘I think AI beats the whole idea of a degree, but it can be used for grammar correction and general fluidity.’

    ‘Because it would be unfair and result in the student not really learning or thinking for themselves.’

    Only 7% of students said they would use an AI tool for help with plagiarism or referencing (‘Ask my lecturer’ was at 30% and ‘Use a 24/7 university online writing feedback tool’ was at 21%).

    3. Students who use AI regularly are less likely to rank ‘fear of failing’ as one of their top three study stresses.

    We asked all students – regardless of their AI use – for their top three reasons for feeling stressed about studying; the responses were as follows:

    • 61% of all UK students included ‘fear of failing’ in their top 3 reasons for feeling stressed about studying;
    • 52% of all students included ‘balancing other commitments’; and
    • 41% of all students included ‘preparing for exams and assessments’.

    These statistics change when we filter by students who use AI tools to help with assignments or study tasks. Fear of failing is still the highest-ranked study stress. The percentage of respondents who rank fear of failing in their top three study stresses by AI use are as follows:

    • 69% for those who never use AI;
    • 62% for those who have used AI once or twice;
    • 58% for those who have used AI a few times; and
    • 50% for those who use AI regularly.

    Looking at the main reasons students want to use the university’s AI service for support or feedback, this year, ‘confidence’ (25%) overtook ‘speed’ (16%). Female respondents, in particular, are using AI for reasons relating to confidence at 29%, compared to 20% for male students. International students valued ‘skills’ the most at 20%, significantly higher than their domestic student counterparts at 11%.

    4. Students who feel like they belong are more likely to use AI.

    We examined the correlation between students’ sense of belonging in their university community and how much they use AI tools to help with assignments or study tasks.

    For students who feel like they belong, 67% said they have used AI tools to help with assignments or study tasks; this compares with 47% for students who do not feel like they belong.      

    5. Cognitive offloading (using technology to circumvent the ‘learning element’ of a task) is a top concern of academics and institutional leadership in 2025. However, student responses suggest they feel they are both learning and improving their skills when using generative tools.

    When asked if they were confident they are learning as well as improving their own skills when using generative tools, students responded as follows:

    • 12% were extremely confident that they were learning and developing skills;
    • 31% were very confident;
    • 29% were moderately confident;
    • 26% were slightly confident; and
    • Only 5% were not at all confident that this was true.

    Conclusion:

    Reflecting on the three years since Gen-AI’s disruptive entrance into the mainstream, the sector has now come to terms with the power, potential, and risks of Gen-AI. There is also a significantly better understanding of the importance of ensuring these tools enhance student learning rather than undermining it by offloading cognitive effort.

    Leaders can look to a holistic approach to university-approved, trusted Gen-AI support, to improve student outcomes, experience and wellbeing.

    You can download the full Annual Global Student Wellbeing Survey – UK report here.

    Studiosity is a HEPI Partner. Studiosity is AI-for-Learning, not corrections – to scale student success, empower educators, and improve retention with a proven 4.4x ROI, while ensuring integrity and reducing institutional risk. Studiosity delivers ethical and formative feedback at scale to over 250 institutions worldwide. With unique AI-for-Learning technology, all students can benefit from formative feedback in minutes. From their first draft to just before submission, students receive personalised feedback – including guidance on how they can demonstrably improve their own work and critical thinking skills. Actionable insight is accessible to faculty and leaders, revealing the scale of engagement with support, cohorts requiring intervention, and measurable learning progress.

    Source link