Category: knowledge

  • Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

    by Carmen Cabrera and Ruth Neville

    Generative artificial intelligence (GAI) tools are rapidly transforming how university students learn, create and engage with knowledge. Powered by techniques such as neural network algorithms, these tools generate new content, including text, tables, computer code, images, audio and video, by learning patterns from existing data. Their outputs typically bear a close resemblance to human-generated content. While GAI shows great promise for improving the learning experience across disciplines, its growing uptake also raises concerns about misuse, over-reliance and, more generally, its impact on the learning process. In response, multiple UK higher education (HE) institutions have issued guidance outlining acceptable use and warning against breaches of academic integrity. However, discussions about the role of GAI in the HE learning process have been led mostly by educators and institutions, and less attention has been given to how students perceive and use GAI.

    Our recent study, published in Perspectives: Policy and Practice in Higher Education, helps to address this gap by bringing student perspectives into the discussion. Drawing on a survey conducted in early 2024 with 132 undergraduate students from six UK universities, the study reveals a striking paradox: students are using GAI tools widely, and expect their use to increase, yet fewer than 25% regard its outputs as reliable. High levels of use therefore coexist with low levels of trust.

    Using GAI without trusting it

    At first glance, the widespread use of GAI among students might be taken as a sign of growing confidence in these tools. Yet, when asked about the reliability of GAI outputs, many students disagree that GAI can be considered a reliable source of knowledge. This apparent contradiction raises the question of why students are still using tools they do not fully trust. The answer lies in the convenience of GAI. Students are not necessarily using GAI because they believe it is accurate; they are using it because it is fast, accessible and can help them get started or work more efficiently. Our study suggests that perceived usefulness may be outweighing students’ scepticism about the reliability of outputs, as this scepticism does not seem to be slowing adoption. Nearly all student groups surveyed reported that they expect to continue using generative AI in the future, indicating that low levels of trust are unlikely to deter ongoing or increased use.

    Not all perceptions are equal

    While the “high use – low trust” paradox is evident across student groups, the study also reveals systematic differences in the adoption and perception of GAI by gender and by domicile status (UK vs international students). Male and international students tend to report higher levels of both past and anticipated future use of GAI tools, and more permissive attitudes towards AI-assisted learning, compared to female and UK-domiciled students. These differences should not necessarily be interpreted as evidence that some students are more ethical, critical or technologically literate than others. What we are likely seeing are responses to the different pressures and contexts shaping how students engage with these tools. For international students in particular, GAI can help them navigate language barriers or unfamiliar academic conventions; in those circumstances, GAI may work as a form of academic support rather than a shortcut. Meanwhile, differences in attitudes by gender reflect wider patterns observed in research on academic integrity and risk-taking, where female students often report greater concern about following rules and avoiding sanctions. These findings suggest that students’ engagement with GAI is influenced by their positionality within higher education, and not just by their individual attitudes.

    Different interpretations of institutional guidance

    Discrepancies by gender and domicile status go beyond patterns of use and trust, extending to how students interpret institutional guidance on generative AI. Most UK universities now publish policies outlining acceptable and unacceptable uses of GAI in relation to assessment and academic integrity, and typically present these rules as applying uniformly to all students. In practice, as evidenced by our study, students interpret these guidelines differently. UK-domiciled students, especially women, tend to adopt more cautious readings, sometimes treating permitted uses, such as using GAI for initial research or topic overviews, as potential misconduct. International students, by contrast, are more likely to express permissive or uncertain views, even in relation to practices that are more clearly prohibited. Shared rules do not guarantee shared understanding, especially if guidance is ambiguous or unevenly communicated. GAI is evolving faster than university policy, so addressing this unevenness in understanding is an urgent challenge for higher education.

    Where does the ‘problem’ lie?

    Students are navigating rapidly evolving technologies within assessment frameworks that were not designed with GAI in mind. At the same time, they are responding to institutional guidance that is frequently high-level, unevenly communicated and difficult to translate into everyday academic practice. Yet there is a tendency to treat GAI misuse as a problem stemming from individual student behaviour. Our findings point instead to structural and systemic issues shaping how students engage with these tools. From this perspective, variation in student behaviour could reflect the uneven inclusivity of current institutional guidelines. Even when policies are identical for all, the evidence indicates that they are not experienced in the same way across student groups, underscoring the need to promote fairness and reduce differential risk at the institutional level.

    These findings also have clear implications for assessment and teaching. Since students are already using GAI widely, assessment design needs to avoid reactive attempts to exclude GAI. A more effective and equitable approach may involve acknowledging GAI use where appropriate, supporting students to engage with it critically and designing learning activities that continue to cultivate critical thinking, judgement and communication skills. In some cases, this may also mean emphasising in-person, discussion-based or applied forms of assessment where GAI offers limited advantage. Equally, digital literacy initiatives need to go beyond technical competence. Students require clearer and more concrete examples of what constitutes acceptable and unacceptable use of GAI in specific assessment contexts, as well as opportunities to discuss why these boundaries exist. Without this, institutions risk creating environments in which some students become too cautious in using GAI, while others cross lines they do not fully understand.

    More broadly, policymakers and institutional leaders should avoid assuming a single student response to GAI. As this study shows, engagement with these tools is shaped by gender, educational background, language and structural pressures. Treating the student body as homogeneous risks reinforcing existing inequalities rather than addressing them. Public debate about GAI in HE frequently swings between optimism and alarm. This research points to a more grounded reality where students are not blindly trusting AI, but their use of it is increasing, sometimes pragmatically, sometimes under pressure. As GAI systems continue evolving, understanding how students navigate these tools in practice is essential to developing policies, assessments and teaching approaches that are both effective and fair.

    You can find more information in our full research paper: https://www.tandfonline.com/doi/full/10.1080/13603108.2025.2595453

    Dr Carmen Cabrera is a Lecturer in Geographic Data Science at the Geographic Data Science Lab, within the University of Liverpool’s Department of Geography and Planning. Her areas of expertise are geographic data science, human mobility, network analysis and mathematical modelling. Carmen’s research focuses on developing quantitative frameworks to model and predict human mobility patterns across spatiotemporal scales and population groups, ranging from intraurban commutes to migratory movements. She is particularly interested in establishing methodologies to facilitate the efficient and reliable use of new forms of digital trace data in the study of human movement. Prior to her position as a Lecturer, Carmen completed a BSc and MSc in Physics and Applied Mathematics, specialising in Network Analysis. She then did a PhD at University College London (UCL), focussing on the development of mathematical models of social behaviours in urban areas, against the theoretical backdrop of agglomeration economies. After graduating from her PhD in 2021, she was a Research Fellow in Urban Mobility at the Centre for Advanced Spatial Analysis (CASA), at UCL, where she currently holds an honorary position.

    Dr Ruth Neville is a Research Fellow at the Centre for Advanced Spatial Analysis (CASA), UCL, working at the intersection of Spatial Data Science, Population Geography and Demography. Her PhD research considers the driving forces behind international student mobility into the UK, the susceptibility of student applications to external shocks, and forecasting future trends in applications using machine learning. Ruth has also worked on projects related to human mobility in Latin America during the COVID-19 pandemic, the relationship between internal displacement and climate change in the East and Horn of Africa, and displacement of Ukrainian refugees. She has a background in Political Science, Economics and Philosophy, with a particular interest in electoral behaviour.


  • Who gets to decide what counts as knowledge? Big tech, AI, and the future of epistemic agency in higher education

    by Mehreen Ashraf, Eimear Nolan, Manuel F Ramirez, Gazi Islam and Dirk Lindebaum

    Walk into almost any university today, and you can be sure to encounter the topic of AI and how it affects higher education (HE). AI applications, especially large language models (LLMs), have become part of everyday academic life, being used for drafting outlines, summarising readings, and even helping students to ‘think’. For some, the emergence of LLMs is a revolution that makes learning more efficient and accessible. For others, it signals something far more unsettling: a shift in how and by whom knowledge is controlled. This latter point is the focus of our new article published in Organization Studies.

    At the heart of our article is a shift in what is referred to as epistemic (or knowledge) governance: the way in which knowledge is created, organised, and legitimised in HE. In plain terms, epistemic governance is about who gets to decide what counts as credible, whose voices are heard, and how the rules of knowing are set. Universities have historically been central to epistemic governance through peer review, academic freedom, teaching, and the public mission of scholarship. But as AI tools become deeply embedded in teaching and research, those rules are being rewritten not by educators or policymakers, but by the companies that own the technology.

    From epistemic agents to epistemic consumers

    Universities, academics, and students have traditionally been epistemic agents: active producers and interpreters of knowledge. They ask questions, test ideas, and challenge assumptions. But when we rely on AI systems to generate or validate content, we risk shifting from being agents of knowledge to consumers of knowledge. Technology takes on the heavy cognitive work: it finds sources, summarises arguments, and even produces prose that sounds academic. However, this efficiency comes at the cost of profound changes in the nature of intellectual work.

    Students who rely on AI to tidy up their essays, or generate references, will learn less about the process of critically evaluating sources, connecting ideas and constructing arguments, which are essential for reasoning through complex problems. Academics who let AI draft research sections, or feed decision letters and reviewer reports into AI with the request that AI produces a ‘revision strategy’, might save time but lose the slow, reflective process that leads to original thought, while undercutting their own agency in the process. And institutions that embed AI into learning systems hand part of their epistemic governance – their authority to define what knowledge is and how it is judged – to private corporations.

    This is not about individual laziness; it is structural. As Shoshana Zuboff argued in The age of surveillance capitalism, digital infrastructures do not just collect information, they reorganise how we value and act upon it. When universities become dependent on tools owned by big tech, they enter an ecosystem where the incentives are commercial, not educational.

    Big tech and the politics of knowing

    The idea that universities might lose control of knowledge sounds abstract, but it is already visible. Jisc’s 2024 framework on AI in tertiary education warns that institutions must not ‘outsource their intellectual labour to unaccountable systems,’ yet that outsourcing is happening quietly. Many UK universities, including the University of Oxford, have signed up to corporate AI platforms to be used by staff and students alike. This, in turn, facilitates the collection of data on learning behaviours that can be fed back into proprietary models.

    This data loop gives big tech enormous influence over what is known and how it is known. A company’s algorithm can shape how research is accessed, which papers surface first, or which ‘learning outcomes’ appear most efficient to achieve. That’s epistemic governance in action: the invisible scaffolding that structures knowledge behind the scenes. At the same time, it is easy to see why AI technologies appeal to universities under pressure. AI tools promise speed, standardisation, lower costs, and measurable performance, all seductive in a sector struggling with staff shortages and audit culture. But those same features risk hollowing out the human side of scholarship: interpretation, dissent, and moral reasoning. The risk is not that AI will replace academics but that it will change them, turning universities from communities of inquiry into systems of verification.

    The Humboldtian ideal and why it is still relevant

    The modern research university was shaped by the 19th-century thinker Wilhelm von Humboldt, who imagined higher education as a public good, a space where teaching and research were united in the pursuit of understanding. The goal was not efficiency: it was freedom. Freedom to think, to question, to fail, and to imagine differently.

    That ideal has never been perfectly achieved, but it remains a vital counterweight to market-driven logics that render AI a natural way forward in HE. When HE serves as a place of critical inquiry, it nourishes democracy itself. When it becomes a service industry optimised by algorithms, it risks producing what Žižek once called ‘humans who talk like chatbots’: fluent, but shallow.

    The drift toward organised immaturity

    Scholars like Andreas Scherer and colleagues describe this shift as organised immaturity: a condition where sociotechnical systems prompt us to stop thinking for ourselves. While AI tools appear to liberate us from labour, they are actually narrowing the space for judgement and doubt.

    In HE, that immaturity shows up when students skip the reading because ‘ChatGPT can summarise it’, or when lecturers rely on AI slides rather than designing lessons for their own cohort. Each act seems harmless, but collectively they erode our epistemic agency. The more we delegate cognition to systems optimised for efficiency, the less we cultivate the messy, reflective habits that sustain democratic thinking. Immanuel Kant once defined immaturity as ‘the inability to use one’s understanding without guidance from another.’ In the age of AI, that ‘other’ may well be an algorithm trained on millions of data points, but answerable to no one.

    Reclaiming epistemic agency

    So how can higher education reclaim its epistemic agency? The answer lies not only in rejecting AI but also in rethinking our possible relationships with it. Universities need to treat generative tools as objects of inquiry, not as invisible infrastructure. That means embedding critical digital literacy across curricula: not simply training students to use AI responsibly, but teaching them to question how it works, whose knowledge it privileges, and whose it leaves out.

    In classrooms, educators could experiment with comparative exercises: have students write an essay on their own, then analyse an AI version of the same task. What’s missing? What assumptions are built in? How were students changed when the AI wrote the essay for them, compared with when they wrote it themselves? As the Russell Group’s 2024 AI principles note, ‘critical engagement must remain at the heart of learning.’

    In research, academics too must realise that their unique perspectives, disciplinary judgement, and interpretive voices matter, perhaps now more than ever, in a system where AI’s homogenisation of knowledge looms. We need to understand that the more we subscribe to optimisation and efficiency as the preferred values of academic work, the more naturally AI’s penetration of HE will unfold.

    Institutionally, universities might consider building open, transparent AI systems through consortia, rather than depending entirely on proprietary tools. This isn’t just about ethics; it’s about governance and ensuring that epistemic authority remains a public, democratic responsibility.

    Why this matters to you

    Epistemic governance and epistemic agency may sound like abstract academic terms, but they refer to something fundamental: the ability of societies and citizens (not just ‘workers’) to think for themselves. When, or if, universities lose control over how knowledge is created, validated and shared, we risk not just changing education but weakening democracy. As journalist George Monbiot recently wrote, ‘you cannot speak truth to power if power controls your words.’ The same is true for HE. We cannot speak truth to power if power now writes our essays, marks our assignments, and curates our reading lists.

    Mehreen Ashraf is an Assistant Professor at Cardiff Business School, Cardiff University, United Kingdom.

    Eimear Nolan is an Associate Professor in International Business at Trinity Business School, Trinity College Dublin, Ireland.

    Manuel F Ramirez is Lecturer in Organisation Studies at the University of Liverpool Management School, UK.

    Gazi Islam is Professor of People, Organizations and Society at Grenoble Ecole de Management, France.

    Dirk Lindebaum is Professor of Management and Organisation at the School of Management, University of Bath.
