  • From detection to development: how universities are ethically embedding AI-for-learning

    Author:
    Mike Larsen

    Today’s blog was kindly authored by Mike Larsen, Chief Executive Officer at Studiosity, a HEPI Partner.

    The future of UK higher education rests upon the assurance of student learning outcomes. While GenAI presents the sector with immense opportunities for advancement and efficiency, it remains constrained by an anachronistic model of plagiarism detection rooted in adversarialism. I believe the ‘Police and Punish’ model must now be replaced by ‘Support and Validate’.

    A reliance upon detection was perhaps once a necessary evil, but it has never aligned with the fundamental values of higher education. The assumption that policing student behaviour is the only way to safeguard standards no longer applies.

    Such a punitive policy model has become increasingly untenable, consuming valuable university resources in unviable investigations and distracting from universities’ core mission. I believe there is a compelling alternative.

    As assessment methods undergo necessary change, higher education institutions must consciously evaluate the risks inherent in abandoning proven means of developing durable critical thinking and communication skills, such as academic writing. New learning and assessment methodologies are required, but they must be adopted on the basis of evidence while concurrently protecting the core promise of higher education.

    An emerging policy framework for consideration and research is ‘support and validate’, which pairs timely, evidence-based academic support with student self-validation of authorship and learning.

    Building capability, confidence and competence provides the ideal preparation for graduates to embrace current and future technology in both the workplace and society.

    The combination of established and immediate academic writing feedback systems with advanced authorship and learning validation capabilities creates a robust and multi-layered solution capable of ensuring quality at scale.

    This is an approach built upon detecting learning, not cheating. Higher education leaders may recognise that this integrated approach empowers learners and unburdens educators, without compromising quality. It ensures the capabilities uniquely developed by higher education, now needed more than ever, are extended and amplified rather than replaced by techno-solutionism.

    We must build a future where assessment security explicitly prioritises learning, not policing. For UK higher education, a pivot from punishment to capability-building and validation may be the only sustainable way to safeguard the value of the degree qualification.

    Studiosity’s AI-for-Learning platform scales student success at hundreds of universities across five continents, with research-backed evidence of impact. Studiosity has recently acquired Norvalid, a world leader in tech-enabled student self-validation of authorship and authentic learning, shifting how higher education approaches assessment security and learning.

     


  • From Detection to Development: How Universities Are Ethically Embedding AI for Learning 

    This HEPI blog was authored by Isabelle Bristow, Managing Director UK and Europe at Studiosity, a HEPI Partner.  

    The Universities UK Annual Conference always serves as a vital barometer for the higher education sector, and this year, few topics were as prominent as the role of Generative Artificial Intelligence (GenAI). A packed session, ‘Ethical AI in Higher Education for improving learning outcomes: A policy and leadership discussion’, provided a refreshing and pragmatic perspective, moving the conversation beyond academic integrity fears and towards genuine educational innovation.

    Based on early findings from new independent research commissioned by Studiosity, the session’s panellists offered crucial insights and a clear path forward. 

    A new focus: from policing to pedagogy 

    For months, the discussion around GenAI has been dominated by concerns over academic misconduct and the development of detection tools. However, as HEPI Director Nick Hillman OBE highlighted, this new report takes a different tack. Its unique focus is on how AI can support active learning, rather than just how students are using it.

    The findings, presented by independent researcher Rebecca Mace, show a direct correlation between the ethical use of AI for learning and improved student attainment and retention. Crucially, these positive effects were particularly noticeable among students often described as ‘non-traditional’. This reframes the conversation, positioning AI not as a threat to learning but as a powerful tool to enhance it, especially for those who need it most. 

    The analogy that works 

    The ferocious pace of AI’s introduction to the sector has undoubtedly caught many off guard. Professor Marc Griffiths, Pro-Vice Chancellor for Regional Partnerships, Engagement & Innovation at UWE Bristol, acknowledged this head-on, advocating for a dual approach of governance and ‘sandboxing’ (the security practice of isolating and testing an application, system or platform to make sure it is safe) of new technologies. Instead of simply denying access, he argued, we must test new tools and develop clear guardrails for their use.

    In a welcome departure from the widely used but ultimately flawed calculator analogy (see Generative AI is not a ‘calculator for words’: 5 reasons why this idea is misleading), Professor Griffiths offered a more fitting one: the overhead projector. Like PowerPoint today, the projector was a new technology that served as a conduit for content, but it never replaced the core act of teaching and learning itself. AI, he posited, is simply another conduit. It is what we put into it, and what we get out of it, that matters.

    Evidence-based insights and reframing the conversation

    The panel also grappled with the core questions leaders must ask themselves. Stephanie Harris, Director of Policy at Universities UK, posed two fundamental challenges:

    • How can I safeguard my key product that I am offering to students? 
    • How can I prepare my students for the workforce if I don’t yet know how AI will be used in the future? 

    She stressed the importance of protecting the integrity of the educational experience to prevent an ‘erosion of trust’ between students and institutions. In response to the second question, both Steph and Marc emphasised that the answer lies not in specific tech skills but in timeless critical thinking skills that will prepare students not just for the next three years but for the next 15. The conversation also touched upon the need for universities to consider students under 16 as the future pipeline, ensuring our policies and frameworks are future-proof. Steph also pointed leaders to further prompts in a UUK-authored OfS blog, Embracing innovation in higher education: our approach to artificial intelligence, which she summed up with the commonsense shorthand ‘have fun, don’t be stupid!’.

    The session drove home the importance of evidence-based insights. Dr David Pike, Head of Digital Learning at the University of Bedfordshire, shared key findings from his own research comparing outcomes for students who used Studiosity with those who did not, stating that the results were ‘very clear’: students did improve at scale. He provided powerful data showing significant, measurable academic progress, along with a large positive correlation with retention and progression. Dr Pike concluded that, given this demonstrated positive impact, we should be calling the technology ‘Assisted Intelligence’, because when used correctly, that is exactly what it is.

    A guiding framework of values 

    To navigate this new landscape, Professor Griffiths laid out seven core values that must underpin institutional policy on AI: 

    1. Academic integrity: Supporting learning, not replacing it.
    2. Equity of access: Addressing the real challenge of paywalls.
    3. Transparency: Clearly communicating how students will be supported.
    4. Ethical responsibility
    5. Empowerment and capability building
    6. Resilience
    7. Adaptability

    These values offer a robust framework for leaders looking to create policies that are both consistent and fair, ensuring that AI use aligns with a university’s mission. 

    The policy challenge of digital inequality 

    The issue of equity of access was explored in greater detail by Nick Hillman, who connected the digital divide to the broader student funding landscape. He pointed out that no government has commissioned a proper review of the actual cost of being a student since 1958, even though involving oneself fully in modern student life can cost upwards of £20,000 a year. He made a powerful case for increased maintenance support to match an increased tuition fee, which would also help prevent further disparity between those who can afford premium tech tools and those who cannot. This highlights that addressing digital inequality is not just a technical challenge; it is a fundamental policy one too.

    In closing 

    The session’s core message was clear: while the rise of AI has been rapid, the sector’s response does not have to be only reactive. By embracing a proactive, values-led approach that prioritises ethical development, equity and human-centric learning, universities can turn what was once seen as a threat into a powerful catalyst for positive change. 

    Studiosity is AI-for-Learning, not corrections: built to scale student success, empower educators, and improve retention with proven impact, while ensuring integrity and reducing institutional risk.
