Tag: Understanding

  • Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

    Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

    by Carmen Cabrera and Ruth Neville

    Generative artificial intelligence (GAI) tools are rapidly transforming how university students learn, create and engage with knowledge. Powered by techniques such as neural network algorithms, these tools generate new content, including text, tables, computer code, images, audio and video, by learning patterns from existing data. The outputs are usually characterised by their close resemblance to human-generated content. While GAI shows great promise to improve the learning experience in various disciplines, its growing uptake also raises concerns about misuse, over-reliance and more generally, its impact on the learning process. In response, multiple UK HE institutions have issued guidance outlining acceptable use and warning against breaches of academic integrity. However, discussions about the role of GAI in the HE learning process have been led mostly by educators and institutions, and less attention has been given to how students perceive and use GAI.

    Our recent study, published in Perspectives: Policy and Practice in Higher Education, helps to address this gap by bringing student perspectives into the discussion. Drawing on a survey conducted in early 2024 with 132 undergraduate students from six UK universities, the study reveals a striking paradox. Students are using GAI tools widely, and expect their use to increase, yet fewer than 25% regard its outputs as reliable. High levels of use therefore coexist with low levels of trust.

    Using GAI without trusting it

    At first glance, the widespread use of GAI among students might be taken as a sign of growing confidence in these tools. Yet, when asked about the reliability of GAI outputs, many students disagree that GAI can be considered a reliable source of knowledge. This apparent contradiction raises the question: why are students still using tools they do not fully trust? The answer lies in the convenience of GAI. Students are not necessarily using GAI because they believe it is accurate. They are using it because it is fast, accessible and can help them get started or work more efficiently. Our study suggests that perceived usefulness may be outweighing students’ scepticism towards the reliability of outputs, as this scepticism does not seem to be slowing adoption. Nearly all student groups surveyed reported that they expect to continue using generative AI in the future, indicating that low levels of trust are unlikely to deter ongoing or increased use.

    Not all perceptions are equal

    While the “high use – low trust” paradox is evident across student groups, the study also reveals systematic differences in the adoption and perceptions of GAI by gender and by domicile status (UK vs international students). Male and international students tend to report higher levels of both past and anticipated future use of GAI tools, and more permissive attitudes towards AI-assisted learning compared to female and UK-domiciled students. These differences should not necessarily be interpreted as evidence that some students are more ethical, critical or technologically literate than others. What we are likely seeing are responses to different pressures and contexts shaping how students engage with these tools. Particularly for international students, GAI can help navigate language barriers or unfamiliar academic conventions. In those circumstances, GAI may work as a form of academic support rather than a shortcut. Meanwhile, differences in attitudes by gender reflect wider patterns observed in research on academic integrity and risk-taking, where female students often report greater concern about following rules and avoiding sanctions. These findings suggest that students’ engagement with GAI is influenced by their positionality within higher education, and not just by their individual attitudes.

    Different interpretations of institutional guidance

    Discrepancies by gender and domicile status go beyond patterns of use and trust, extending to how students interpret institutional guidance on generative AI. Most UK universities now publish policies outlining acceptable and unacceptable uses of GAI in relation to assessment and academic integrity, and typically present these rules as applying uniformly to all students. In practice, as evidenced by our study, students interpret these guidelines differently. UK-domiciled students, especially women, tend to adopt more cautious readings, sometimes treating permitted uses, such as using GAI for initial research or topic overviews, as potential misconduct. International students, by contrast, are more likely to express permissive or uncertain views, even in relation to practices that are more clearly prohibited. Shared rules do not guarantee shared understanding, especially if guidance is ambiguous or unevenly communicated. GAI is evolving faster than university policy, so addressing this unevenness in understanding is an urgent challenge for higher education.

    Where does the ‘problem’ lie?

    Students are navigating rapidly evolving technologies within assessment frameworks that were not designed with GAI in mind. At the same time, they are responding to institutional guidance that is frequently high-level, unevenly communicated and difficult to translate into everyday academic practice. Yet there is a tendency to treat GAI misuse as a problem stemming from individual student behaviour. Our findings point instead to structural and systemic issues shaping how students engage with these tools. From this perspective, variation in student behaviour could reflect the uneven inclusivity of current institutional guidelines. Even when policies are identical for all, the evidence indicates that they are not experienced in the same way across student groups, underscoring the need to promote fairness and reduce differential risk at the institutional level.

    These findings also have clear implications for assessment and teaching. Since students are already using GAI widely, assessment design needs to avoid reactive attempts to exclude GAI. A more effective and equitable approach may involve acknowledging GAI use where appropriate, supporting students to engage with it critically and designing learning activities that continue to cultivate critical thinking, judgement and communication skills. In some cases, this may also mean emphasising in-person, discussion-based or applied forms of assessment where GAI offers limited advantage. Equally, digital literacy initiatives need to go beyond technical competence. Students require clearer and more concrete examples of what constitutes acceptable and unacceptable use of GAI in specific assessment contexts, as well as opportunities to discuss why these boundaries exist. Without this, institutions risk creating environments in which some students become too cautious in using GAI, while others cross lines they do not fully understand.

    More broadly, policymakers and institutional leaders should avoid assuming a single student response to GAI. As this study shows, engagement with these tools is shaped by gender, educational background, language and structural pressures. Treating the student body as homogeneous risks reinforcing existing inequalities rather than addressing them. Public debate about GAI in HE frequently swings between optimism and alarm. This research points to a more grounded reality where students are not blindly trusting AI, but their use of it is increasing, sometimes pragmatically, sometimes under pressure. As GAI systems continue evolving, understanding how students navigate these tools in practice is essential to developing policies, assessments and teaching approaches that are both effective and fair.

    You can find more information in our full research paper: https://www.tandfonline.com/doi/full/10.1080/13603108.2025.2595453

    Dr Carmen Cabrera is a Lecturer in Geographic Data Science at the Geographic Data Science Lab, within the University of Liverpool’s Department of Geography and Planning. Her areas of expertise are geographic data science, human mobility, network analysis and mathematical modelling. Carmen’s research focuses on developing quantitative frameworks to model and predict human mobility patterns across spatiotemporal scales and population groups, ranging from intraurban commutes to migratory movements. She is particularly interested in establishing methodologies to facilitate the efficient and reliable use of new forms of digital trace data in the study of human movement. Prior to her position as a Lecturer, Carmen completed a BSc and MSc in Physics and Applied Mathematics, specialising in Network Analysis. She then did a PhD at University College London (UCL), focussing on the development of mathematical models of social behaviours in urban areas, against the theoretical backdrop of agglomeration economies. After graduating from her PhD in 2021, she was a Research Fellow in Urban Mobility at the Centre for Advanced Spatial Analysis (CASA), at UCL, where she currently holds an honorary position.

    Dr Ruth Neville is a Research Fellow at the Centre for Advanced Spatial Analysis (CASA), UCL, working at the intersection of Spatial Data Science, Population Geography and Demography. Her PhD research considers the driving forces behind international student mobility into the UK, the susceptibility of student applications to external shocks, and forecasting future trends in applications using machine learning. Ruth has also worked on projects related to human mobility in Latin America during the COVID-19 pandemic, the relationship between internal displacement and climate change in the East and Horn of Africa, and displacement of Ukrainian refugees. She has a background in Political Science, Economics and Philosophy, with a particular interest in electoral behaviour.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education


  • Getting Consistent Results from AI: Understanding the AI Context Window

    Getting Consistent Results from AI: Understanding the AI Context Window

    From the Cool Cat Teacher Blog by Vicki Davis

    Subscribe to the 10 Minute Teacher Podcast anywhere you listen to podcasts.

    Too many teachers are struggling with getting good results from AI. Sometimes it just goes “off the rails,” or they might get good results one day and not the next. The same thing happens with students as they use AI. Problems caused by not understanding how AI works lead to lost time, confusion, and sometimes even student misuse. Much of this problem is related to the context window of AI. How do we understand AI and the context window? How do we teach about this?

    Today’s guest is Rob the AI Guy. He started his work leading a major social media agency but now runs a thriving community focused on AI agents and automation. He is my favorite AI YouTuber, and I joined his Skool this year, which I’ve found to be my most helpful resource for keeping up with everything AI. Rob clearly explains how AI works, but perhaps the most useful concept he shares is how the context window works. If you’re struggling to use AI well, trying to understand it, or grappling with how to explain it to your students, this is the show for you.


    Key Takeaways for Teachers

    • AI tools behave inconsistently because they rely on a limited context window that fills up and resets (see the sketch after this list).
    • Starting a new chat can dramatically improve AI responses when results drift or become unreliable.
    • Different AI models have different strengths, and using more than one can lead to better outcomes.
    • Students need explicit instruction on how AI works so they don’t overtrust or misuse it.
    • Critical thinking is more important than memorization in a world where AI can retrieve information instantly.
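
    The first takeaway above explains both the drift and the fix, and it can be illustrated in miniature. The sketch below is a rough, purely illustrative Python example; the 8,000-token window size and the four-characters-per-token rule of thumb are assumptions made for the sake of the example, not figures from any particular AI tool.

    ```python
    # Why long chats "go off the rails": the model only sees a fixed budget of
    # tokens, so as a conversation grows, earlier turns silently drop out of view.
    # The 8,000-token limit and the 4-characters-per-token rule of thumb below
    # are illustrative assumptions, not any vendor's real figures.

    CONTEXT_LIMIT_TOKENS = 8_000   # hypothetical window size
    CHARS_PER_TOKEN = 4            # rough heuristic for English text

    def estimate_tokens(text: str) -> int:
        """Crudely estimate how many tokens a piece of text consumes."""
        return max(1, len(text) // CHARS_PER_TOKEN)

    def context_usage(conversation: list[str]) -> float:
        """Return the fraction of the context window the conversation fills."""
        used = sum(estimate_tokens(turn) for turn in conversation)
        return used / CONTEXT_LIMIT_TOKENS

    def should_start_new_chat(conversation: list[str], threshold: float = 0.8) -> bool:
        """Suggest a fresh chat once the window is mostly full and drift is likely."""
        return context_usage(conversation) >= threshold

    # A long tutoring session: hundreds of back-and-forth turns add up quickly.
    chat = ["Can you explain photosynthesis again, but more simply this time?"] * 500
    print(f"Window used: {context_usage(chat):.0%}")
    if should_start_new_chat(chat):
        print("Context nearly full: start a new chat to keep answers consistent.")
    ```

    Once the estimated usage crosses the threshold, the practical advice from the show applies: start a new chat rather than pushing the old one further.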

    Visual Summary

    I created this infographic from this show’s transcript to give a visual overview of some of the topics discussed in the show. I used Google’s NotebookLM.

    Watch the Show

    Watch this video on YouTube. Subscribe to the Cool Cat Teacher Channel on YouTube.

    Listen to the Podcast

    Robert Benjamin – Author Bio as Submitted

    Robert Benjamin, also known as “Rob the AI Guy”

    Robert Benjamin breaks down the latest in AI automation as a serial entrepreneur and YouTuber known for making complex technology accessible. Having consulted with over 350,000 clients through his social media agency, he now runs a thriving community focused on AI agents and automation, bridging the gap between cutting-edge technology and real-world business applications.

    Blog: https://www.skool.com/ai-automation-school


  • Higher Education Inquirer : Understanding U.S. Campus Safety and Mental Health: Guidance for International Students

    Higher Education Inquirer : Understanding U.S. Campus Safety and Mental Health: Guidance for International Students

    The tragic shooting at Brown University in December 2025, which claimed two lives and left nine students wounded, is a stark reminder that even elite U.S. campuses are not immune to violence. For international students, understanding this incident requires placing it in the broader context of the United States’ history of social dangers, treatment of mental illness, and policies affecting foreigners.

    The United States has historically had higher rates of violent crime, including gun-related incidents, than many other developed nations. While campus shootings remain statistically rare, they reflect deeper societal issues: widespread gun access, social inequality, and a culture that often prioritizes armed self-protection over preventative public safety measures. Universities, traditionally viewed as open spaces for learning and discussion, are increasingly sites of surveillance and armed response, reshaping the student experience.

    Foreign students and immigrants may face additional vulnerabilities. Throughout U.S. history, immigrants have often been subject to discrimination, harassment, or violence based on nationality, race, or religion. Universities are not insulated from these pressures, and international students can be particularly susceptible to microaggressions, exclusion, or even targeted hostility. These risks were heightened under the Trump administration, when rhetoric and policies frequently cast foreigners as suspicious or undesirable. Visa restrictions, heightened scrutiny of foreign scholars, and public statements fostering distrust created an environment in which international students might feel unsafe or isolated.

    Mental illness plays a critical role in understanding campus violence, but its treatment in the United States is inconsistent. While many universities provide counseling centers, therapy services, and crisis hotlines, the broader mental health system in the U.S. remains fragmented and under-resourced. Access often depends on insurance coverage, ability to pay, and proximity to care, leaving some individuals untreated or inadequately supported. Cultural stigmas and underdiagnosis can exacerbate the problem, particularly among minority and immigrant populations. International students, unfamiliar with local mental health norms or hesitant to seek care due to cost or cultural barriers, may be less likely to access help until crises arise.

    U.S. universities deploy extensive surveillance systems, emergency protocols, and campus police to respond to threats. These measures aim to mitigate harm once an incident occurs but focus less on prevention of violence or addressing underlying causes, including untreated mental illness. Students are required to participate in drills and safety training, creating a reactive rather than preventative model.

    Compared to other countries, the U.S. approach is distinct. Canadian universities emphasize mental health support and unarmed security. European campuses often maintain open environments with minimal surveillance and preventive intervention strategies. Many Asian universities operate in low-crime contexts with community-based safety measures rather than extensive surveillance. The U.S. approach emphasizes rapid law enforcement response and monitoring, reflecting a society with higher firearm prevalence and less coordinated mental health infrastructure.

    The Brown University tragedy underscores a sobering reality for international students: while the U.S. offers world-class education, it is a nation with elevated risks of violent crime, inconsistent mental health care, and historical and ongoing challenges for foreigners. Awareness, preparedness, community engagement, and proactive mental health support are essential tools for international students navigating higher education in this environment.


    Sources

    The Guardian: Brown University shooting: police release more videos of person of interest as FBI offers reward

    Reuters: Manhunt for Brown University shooter stretches into fourth day

    Washington Post: Hunt for Brown University gunman starts anew as tension rises

    AP News: Brown University shooting victims identified

    People: Brown University shooting victim Kendall Turner

    WUSF: Brown University shooting victims update

    Wikipedia: 2025 Brown University shooting

    Pew Research Center: International Students in the United States

    Brookings Institution: Immigrant Vulnerability and Safety in the U.S.

    National Alliance on Mental Illness: Mental Health in Higher Education

    Journal of American College Health: Mental Health Services Utilization Among College Students


  • A Critical Tool for Understanding U.S. Intervention in Latin America

    A Critical Tool for Understanding U.S. Intervention in Latin America

    Barbara Trent’s 1992 documentary The Panama Deception remains an essential work for comprehending the nature and impact of U.S. military interventions in Latin America. Winner of the Academy Award for Best Documentary Feature, the film offers a critical examination of the December 1989 U.S. invasion of Panama—Operation Just Cause—and challenges official narratives about that intervention.

    Reframing the Official Narrative

    The Panama Deception scrutinizes the publicly stated justifications for the 1989 invasion, including protecting U.S. citizens, defending democracy, and combating drug trafficking. In doing so, it highlights stark discrepancies between government claims and on‑the‑ground realities, arguing that the invasion served broader strategic interests rather than solely humanitarian or legal objectives.

    The film foregrounds how the U.S. government and mainstream media shaped public perceptions, often downplaying civilian casualties and simplifying complex political dynamics to justify military action. By exposing this media bias, the documentary encourages viewers to question official accounts and consider how information is framed in service of policy goals. 

    Human Cost and Civilian Impact

    A central contribution of the documentary is its focus on the human toll of the invasion. Using firsthand testimonies, footage of destruction, and accounts of displacement, the film documents the suffering of Panamanian civilians—particularly in impoverished neighborhoods such as El Chorrillo—which received limited attention in U.S. media coverage. 

    These portrayals deepen our understanding of how military interventions affect everyday lives beyond abstract geopolitical objectives. For students and scholars of international relations, human rights, and media studies, this emphasis provides a critical counterpoint to sanitized official histories.

    Historical and Geopolitical Context

    The Panama Deception situates the Panama invasion within a longer history of U.S. influence in the region. It suggests that long‑standing strategic interests—including control over the Panama Canal and hemispheric dominance—shaped U.S. policy long before the invasion’s official rationales were publicly articulated.

    While the Torrijos‑Carter Treaties mandated transfer of canal control to Panama by 1999, the film and many independent observers argue that U.S. policymakers were intent on maintaining influence and minimizing threats to American strategic goals. 

    Media Critique and Public Perception

    One of the documentary’s most enduring contributions lies in its examination of media complicity. The Panama Deception demonstrates how mainstream outlets often uncritically echoed government talking points, marginalizing dissenting voices from Panamanian civilians, independent journalists, and human rights advocates. 

    This critique remains relevant for students exploring how propaganda, framing, and selective reporting can influence public support for foreign policy decisions. The film thus serves as a case study for media literacy alongside political critique.

    Conclusion: Educational and Analytical Value

    The Panama Deception offers a multifaceted analysis of U.S. interventionism that transcends a single historical event. By combining archival evidence, eyewitness accounts, and critical commentary, it provides learners with a structured means to examine the intersections of power, narrative, and human consequence. For educators and researchers in Latin American studies, political science, and media studies, the film underscores the importance of questioning official narratives and exploring the lived effects of foreign policy decisions.


    Selected Sources

    • The Panama Deception (1992 documentary overview and details). 

    • Analysis of the film’s critique of U.S. media and government narratives. 

    • Historical context on media bias and human impact. 

    • Wider context on the invasion and implications related to Panama Canal treaty issues. 


  • Adult Student Priorities Survey: Understanding Your Adult Learners 

    Adult Student Priorities Survey: Understanding Your Adult Learners 

    The Adult Student Priorities Survey (ASPS) is the instrument in the family of Satisfaction-Priorities Surveys that best captures the experiences of graduate-level students and adult learners in undergraduate programs at four-year institutions. The Adult Student Priorities Survey provides student perspectives for non-traditional populations along with external national benchmarks to inform decision-making for nearly 100 institutions across the country.

    Why the Adult Student Priorities Survey matters

    As a comprehensive survey instrument, the Adult Student Priorities Survey assesses student satisfaction within the context of the level of importance that students place on a variety of experiences, both inside and outside of the classroom. The combination of satisfaction and importance scores enables the identification of institutional strengths (areas of high importance and high satisfaction) and institutional challenges (areas of high importance and low satisfaction). Strengths can be celebrated, and challenges can be addressed by campus leadership to build on what is working where possible and to reinforce other areas where needed.
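
    To illustrate that strengths-and-challenges logic in miniature, here is a purely hypothetical sketch; the item names, scores and cut-offs are invented for illustration and are not the survey’s actual items, scales or thresholds.

    ```python
    # Hypothetical illustration of how importance and satisfaction scores
    # combine into "strengths" and "challenges". All numbers are made up.
    items = {
        "Quality of instruction": {"importance": 6.5, "satisfaction": 6.1},
        "Registration processes": {"importance": 6.3, "satisfaction": 4.7},
        "Campus social events":   {"importance": 4.2, "satisfaction": 5.5},
    }

    HIGH_IMPORTANCE = 6.0    # assumed cut-off on a 7-point scale
    HIGH_SATISFACTION = 5.5  # assumed cut-off on a 7-point scale

    for name, scores in items.items():
        if scores["importance"] >= HIGH_IMPORTANCE:
            label = "strength" if scores["satisfaction"] >= HIGH_SATISFACTION else "challenge"
        else:
            label = "lower priority"
        print(f"{name}: {label}")
    ```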

    With the survey implementation, all currently enrolled students (based on who the institution wants to include) can provide feedback on their experiences with instruction, advising, registration, recruitment/financial aid, support services and how they feel as a student at the institution. The results deliver external benchmarks with other institutions serving adult learners, including data that is specific to graduate programs, and the ability to monitor internal benchmarks when the survey is administered over multiple years. (The national student satisfaction results are published annually). The delivered results also provide the option to analyze subset data for all standard and customizable demographic indicators to understand where targeted initiatives may be required to best serve student populations.

    Connecting ASPS data to student success and retention

    Like the Student Satisfaction Inventory and the Priorities Survey for Online Learners (the other survey instruments in the Satisfaction-Priorities family), the data gathered by the Adult Student Priorities Survey can support multiple initiatives on campus, including informing student success efforts, providing the student voice for strategic planning, documenting priorities for accreditation purposes and highlighting positive messaging for recruitment activities. Student satisfaction has been positively linked with higher individual student retention and higher institutional graduation rates, getting right to the heart of higher education student success.

    Learn more about best practices for administering the online Adult Student Priorities Survey at your institution, which can be done at any time during the academic year on the institution’s timeline.

    Ask for a complimentary consultation with our student success experts

    What is your best approach to increasing student retention and completion? Our experts can help you identify roadblocks to student persistence and maximize student progression. Reach out to set up a time to talk.


  • Understanding how inflation affects teacher well-being and career decisions

    Understanding how inflation affects teacher well-being and career decisions


    In recent years, the teaching profession has faced unprecedented challenges, with inflation emerging as a significant factor affecting educators’ professional lives and career choices. This in-depth examination delves into the complex interplay between escalating inflation rates and the self-efficacy of educators: their belief in their capacity to carry out their teaching responsibilities and achieve the desired instructional outcomes in the classroom.

    The impact of inflation on teachers’ financial stability has become increasingly evident, with many educators experiencing a substantial decline in their “real wages.” While nominal salaries remain relatively stagnant, the purchasing power of teachers’ incomes continues to erode as the cost of living rises. This economic pressure has created a concerning dynamic where educators, despite their professional dedication, find themselves struggling to maintain their standard of living and meet basic financial obligations.
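
    To make the real-wage mechanism concrete with purely hypothetical figures (not drawn from the surveys this piece refers to): a salary frozen at $60,000 while prices rise 5 percent a year for three years buys only about

    \[
    \frac{\$60{,}000}{(1.05)^{3}} \approx \$51{,}800
    \]

    in first-year terms, a real-terms pay cut of roughly 14 percent even though the nominal salary never changed.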

    A particularly troubling trend has emerged in which teachers are increasingly forced to seek secondary employment to supplement their primary income. Recent surveys indicate that approximately 20 percent of teachers now hold second jobs during the academic year, with this percentage rising to nearly 30 percent during summer months. This necessity to work multiple jobs can lead to physical and mental exhaustion, potentially compromising teachers’ ability to maintain the high levels of energy and engagement required for effective classroom instruction.

    The phenomenon of “moonlighting” among educators has far-reaching implications for teacher self-efficacy. When teachers must divide their attention and energy between multiple jobs, their capacity to prepare engaging lessons, grade assignments thoroughly, and provide individualized student support may be diminished. This situation often creates a cycle where reduced performance leads to decreased self-confidence, potentially affecting both teaching quality and student outcomes.

    Financial stress has also been linked to increased levels of anxiety and burnout among teachers, directly impacting their perceived self-efficacy. Studies have shown that educators experiencing financial strain are more likely to report lower levels of job satisfaction and decreased confidence in their ability to meet professional expectations. This psychological burden can manifest in reduced classroom effectiveness and diminished student engagement.

    Perhaps most concerning is the growing trend of highly qualified educators leaving the profession entirely for better-paying opportunities in other sectors. This “brain drain” from education represents a significant loss of experienced professionals who have developed valuable teaching expertise. The exodus of talented educators not only affects current students but also reduces the pool of mentor teachers available to guide and support newer colleagues, potentially impacting the professional development of future educators.

    The correlation between inflation and teacher attrition rates has become increasingly apparent, with economic factors cited as a primary reason for leaving the profession. Research indicates that districts in areas with higher costs of living and significant inflation rates experience greater difficulty in both recruiting and retaining qualified teachers. This challenge is particularly acute in urban areas where housing costs and other living expenses have outpaced teacher salary increases.

    Corporate sectors, technology companies, and consulting firms have become attractive alternatives for educators seeking better compensation and work-life balance. These career transitions often offer significantly higher salaries, better benefits packages, and more sustainable working hours. The skills that make effective teachers, such as communication, organization, and problem-solving, are highly valued in these alternative career paths, making the transition both feasible and increasingly common.

    The cumulative effect of these factors presents a serious challenge to the education system’s sustainability. As experienced teachers leave the profession and prospective educators choose alternative career paths, schools face increasing difficulty in maintaining educational quality and consistency. This situation calls for systematic changes in how we value and compensate educators, recognizing that teacher self-efficacy is intrinsically linked to their financial security and professional well-being.


  • Understanding and writing the Literature Review in Mba Projects

    Understanding and writing the Literature Review in Mba Projects

    Understanding the Topic: Even before starting to write, a student should have full clarity about the research title, the objectives of the study and the research problems.

    Searching for Relevant Literature: Students should search academic databases such as Google Scholar, ResearchGate, JSTOR or Scopus.

    Evaluating Sources: Once relevant sources are collected, students should analyze and evaluate the objectives, findings and limitations of those studies.

    Grouping of Literature: The collected literature should be grouped under subheadings relevant to the study.

    Write Critically: The literature review should be written critically and analytically, and should remain relevant to the study.

    Identify the Research Gap: Students should analyze the literature to find the research gap and specify where their study will add value.

    Cite & Reference: Students should use formats such as APA (7th edition) or Harvard referencing style when citing in the text.


  • OfS’ understanding of the student interest requires improvement

    OfS’ understanding of the student interest requires improvement

    When the Office for Students’ (OfS) proposals for a new quality assessment system for England appeared in the inbox, I happened to be on a lunchbreak from delivering training at a students’ union.

    My own jaw had hit the floor several times during my initial skim of its 101 pages – and so to test the validity of my initial reactions, I attempted to explain, in good faith, the emerging system to the student leaders who had reappeared for the afternoon.

    Having explained that the regulator was hoping to provide students with a “clear view of the quality of teaching and learning” at the university, their first confusion was tied up in the idea that this was even possible in a university with 25,000 students and hundreds of degree courses.

    They’d assumed that some sort of dashboard might be produced that would help students differentiate between at least departments if not courses. When I explained that the “view” would largely be in the form of a single “medal” of Gold, Silver, Bronze or Requires improvement for the whole university, I was met with confusion.

    We’d spent some time before the break discussing the postgraduate student experience – including poor induction for international students, the lack of a policy on supervision for PGTs, and the isolation that PGRs had fed into the SU’s strategy exercise.

    When I explained that OfS was planning to introduce a PGT NSS in 2028 and then use that data in the TEF from 2030-31 – such that their university might not have the data taken into account until 2032-33 – I was met with derision. When I explained that PGRs may be incorporated from 2030–31 onwards, I was met with scorn.

    Keen to know how students might feed in, one officer asked how their views would be taken into account. I explained that as well as the NSS, the SU would have the option to create a written submission to provide contextual insight into the numbers. When one of them observed that “being honest in that will be a challenge given student numbers are falling and so is the SU’s funding”, the union’s voice coordinator (who’d been involved in the 2023 exercise) in the corner offered a wry smile.

    One of the officers – who’d had a rewarding time at the university pretty much despite their actual course – wanted to know if the system was going to tackle students like them not really feeling like they’d learned anything during their degree. Given the proposals’ intention to drop educational gain altogether, I moved on at this point. Young people have had enough of being let down.

    I’m not at home in my own home

    Back in February, you might recall that OfS published a summary of a programme of polling and focus groups that it had undertaken to understand what students wanted and needed from their higher education – and the extent to which they were getting it.

    At roughly the same time, it published proposals for a new initial Condition C5: Treating students fairly, to apply initially to newly registered providers, which drew on that research.

    As well as issues it had identified with things like contractual provisions, hidden costs and withdrawn offers, it was particularly concerned with the risk that students may take a decision about what and where to study based on false, misleading or exaggerated information.

    OfS’ own research into the Teaching Excellence Framework 2023 signals one of the culprits for that misleading. Polling by Savanta in April and May 2024, and follow-up focus groups with prospective undergraduates over the summer both showed that applicants consistently described TEF outcomes as too broad to be of real use for their specific course decisions.

    They wanted clarity about employability rates, continuation statistics, and job placements – but what they got instead was a single provider-wide badge. Many struggled to see meaningful differences between Gold and Silver, or to reconcile how radically different providers could both hold Gold.

    The evidence also showed that while a Gold award could reassure applicants, more than one in five students aware of their provider’s TEF rating disagreed that it was a fair reflection of their own experience. That credibility gap matters.

    If the TEF continues to offer a single label for an entire university, with data that are both dated and aggregated, there is a clear danger that students will once again be misled – this time not by hidden costs or unfair contracts, but by the regulatory tool that is supposed to help them make informed choices.

    You don’t know what I’m feeling

    Absolutely central to the TEF will remain results of the National Student Survey (NSS).

    OfS says that’s because “the NSS remains the only consistently collected, UK-wide dataset that directly captures students’ views on their teaching, learning, and academic support,” and because “its long-running use provides reliable benchmarked data which allows for meaningful comparison across providers and trends over time.”

    It stresses that the survey provides an important “direct line to student perceptions,” which balances outcomes data and adds depth to panel judgements. In other words, the NSS is positioned as an indispensable barometer of student experience in a system that otherwise leans heavily on outcomes.

    But set aside the fact that it surveys only those who make it to the final year of a full undergraduate degree. The NSS doesn’t ask whether students felt their course content was up to date with current scholarship and professional practice, or whether learning outcomes were coherent and built systematically across modules and years — both central expectations under B1 (Academic experience).

    It doesn’t check whether students received targeted support to close knowledge or skills gaps, or whether they were given clear help to avoid academic misconduct through essay planning, referencing, and understanding rules – requirements spelled out in the guidance to B2 (Resources, support and engagement). It also misses whether students were confident that staff were able to teach effectively online, and whether the learning environment – including hardware, software, internet reliability, and access to study spaces – actually enabled them to learn. Again, explicit in B2, but invisible in the survey.

    On assessment, the NSS asks about clarity, fairness, and usefulness of feedback, but it doesn’t cover whether assessment methods really tested what students had been taught, whether tasks felt valid for measuring the intended outcomes, or whether students believed their assessments prepared them for professional standards. Yet B4 (Assessment and awards) requires assessments to be valid and reliable, moderated, and robust against misconduct – areas NSS perceptions can’t evidence.

    I could go on. The survey provides snapshots of the learning experience but leaves out important perception checks on the coherence, currency, integrity, and fitness-for-purpose of teaching and learning, which the B conditions (and students) expect providers to secure.

    And crucially, OfS has chosen not to use the NSS questions on organisation and management in the future TEF at all. That’s despite its own 2025 press release highlighting it as one of the weakest-performing themes in the sector – just 78.5 per cent of students responded positively – and pointing out that disabled students in particular reported significantly worse experiences than their peers.

    OfS said then that “institutions across the sector could be doing more to ensure disabled students are getting the high quality higher education experience they are entitled to,” and noted that the gap between disabled and non-disabled students was growing in organisation and management. In other words, not only is the NSS not fit for purpose, OfS’ intended use of it isn’t either.

    I followed the voice you gave to me

    In the 2023 iteration of the TEF, the independent student submission was supposed to be one of the most exciting innovations. It was billed as a crucial opportunity for providers’ students to tell their own story – not mediated through NSS data or provider spin, but directly and independently. In OfS’ words, the student submission provided “additional insights” that would strengthen the panel’s ability to judge whether teaching and learning really were excellent.

    In this consultation, OfS says it wants to “retain the option of student input,” but with tweaks. The headline change is that the student submission would no longer need to cover “student outcomes” – an area that SUs often struggled with given the technicalities of data and the lack of obvious levers for student involvement.

    On the surface, that looks like a kindness – but scratch beneath the surface, and it’s a red flag. Part of the point of Condition B2.2b is that providers must take all reasonable steps to ensure effective engagement with each cohort of students so that “those students succeed in and beyond higher education.”

    If students’ unions feel unable to comment on how the wider student experience enables (or obstructs) student success and progression, that’s not a reason to delete it from the student submission. It’s a sign that something is wrong with the way providers involve students in what’s done to understand and shape outcomes.

    The trouble is that the light touch response ignores the depth of feedback it has already commissioned and received. Both the IFF evaluation of TEF 2023 and OfS’ own survey of student contacts documented the serious problems that student reps and students’ unions faced.

    They said the submission window was far too short – dropping guidance in October, demanding a January deadline, colliding with elections, holidays, and strikes. They said the guidance was late, vague, inaccessible, and offered no examples. They said the template was too broad to be useful. They said the burden on small and under-resourced SUs was overwhelming, and even large ones had to divert staff time away from core activity.

    They described barriers to data access – patchy dashboards, GDPR excuses, lack of analytical support. They noted that almost a third didn’t feel fully free to say what they wanted, with some monitored by staff while writing. And they told OfS that the short, high-stakes process created self-censorship, strained relationships, and duplication without impact.

    The consultation documents brush most of that aside. Little in the proposals tackles the resourcing, timing, independence, or data access problems that students actually raised.

    I’m not at home in my own home

    OfS also proposes to commission “alternative forms of evidence” – like focus groups or online meetings – where students aren’t able to produce a written submission. The regulator’s claim is that this will reduce burden, increase consistency, and make it easier to secure independent student views.

    The focus group idea is especially odd. Student representatives’ main complaint wasn’t that they couldn’t find the words – it was that they lacked the time, resource, support, and independence to tell the truth. Running a one-off OfS focus group with a handful of students doesn’t solve that. It actively sidesteps the standard in B2 and the DAPs rules on embedding students in governance and representation structures.

    If a student body struggles to marshal the evidence and write the submission, the answer should be to ask whether the provider is genuinely complying with the regulatory conditions on student engagement. Farming the job out to OfS-run focus groups allows providers with weak student partnership arrangements to escape scrutiny – precisely the opposite of what the student submission was designed to do.

    The point is that the quality of a student submission is not just a “nice to have” extra insight for the TEF panel. It is, in itself, evidence of whether a provider is complying with Condition B2. It requires providers to take all reasonable steps to ensure effective engagement with each cohort of students, and says students should make an effective contribution to academic governance.

    If students can’t access data, don’t have the collective capacity to contribute, or are cowed into self-censorship, that is not just a TEF design flaw – it is B2 evidence of non-compliance. The fact that OfS has never linked student submission struggles to B2 is bizarre. Instead of drawing on the submissions as intelligence about engagement, the regulator has treated them as optional extras.

    The refusal to make that link is even stranger when compared to what came before. Under the old QAA Institutional Review process, the student written submission was long-established, resourced, and formative. SUs had months to prepare, could share drafts, and had the time and support to work with managers on solutions before a review team arrived. It meant students could be honest without the immediate risk of reputational harm, and providers had a chance to act before being judged.

    TEF 2023 was summative from the start, rushed and high-stakes, with no requirement on providers to demonstrate they had acted on feedback. The QAA model was designed with SUs and built around partnership – the TEF model was imposed by OfS and designed around panel efficiency. OfS has learned little from the feedback from those who submitted.

    But now I’ve gotta find my own

    While I’m on the subject of learning, we should finally consider how far the proposals have drifted from the lessons of Dame Shirley Pearce’s review. Back in 2019, her panel made a point of recording what students had said loud and clear – the lack of learning gain in TEF was a fundamental flaw.

    In fact, educational gain was the single most commonly requested addition to the framework, championed by students and their representatives who argued that without it, TEF risked reducing success to continuation and jobs.

    Students told the review they wanted a system that showed whether higher education was really developing their knowledge, skills, and personal growth. They wanted recognition of the confidence, resilience, and intellectual development that are as much the point of university as a payslip.

    Pearce’s panel agreed, recommending that Educational Gains should become a fourth formal aspect of TEF, encompassing both academic achievement and personal development. Crucially, the absence of a perfect national measure was not seen as a reason to ignore the issue. Providers, the panel said, should articulate their own ambitions and evidence of gain, in line with their mission, because failing to even try left a gaping hole at the heart of quality assessment.

    Fast forward to now, and OfS is proposing to abandon the concept entirely. To students and SUs who have been told for years that their views shape regulation, the move is a slap in the face. A regulator that once promised to capture the full richness of the student experience is now narrowing the lens to what can be benchmarked in spreadsheets. The result is a framework that tells students almost nothing about what they most want to know – whether their education will help them grow.

    You see the same lack of learning in the handling of extracurricular and co-curricular activity. For students, societies, volunteering, placements, and co-curricular opportunities are not optional extras but integral to how they build belonging, develop skills, and prepare for life beyond university. Access to these opportunities features heavily in the Access and Participation Risk Register precisely because they matter to student success and because they’re a part of the educational offer in and of themselves.

    But in TEF 2023 OfS tied itself in knots over whether they “count” — at times allowing them in if narrowly framed as “educational”, at other times excluding them altogether. To students who know how much they learn outside of the lecture theatre, the distinction looked absurd. Now the killing off of educational gain excludes them altogether.

    You should have listened

    Taken together, OfS has delivered a masterclass in demonstrating how little it has learned from students. As a result, the body that once promised to put student voice at the centre of regulation is in danger of constructing a TEF that is both incomplete and actively misleading.

    It’s a running theme – more evidence that OfS is not interested enough in genuinely empowering students. If students don’t know what they can, should, or could expect from their education – because the standards are vague, the metrics are aggregated, and the judgements are opaque – then their representatives won’t know either. And if their reps don’t know, their students’ union can’t effectively advocate for change.

    When the only judgements against standards that OfS is interested in come from OfS itself, delivered through a very narrow funnel of risk-based regulation, that funnel inevitably gets choked off through appeals to “reduced burden” and aggregated medals that tell students nothing meaningful about their actual course or experience. The result is a system that talks about student voice while systematically disempowering the very students it claims to serve.

    In the consultation, OfS says that it wants its new quality system to be recognised as compliant with the European Standards and Guidelines (ESG), which would in time allow it to seek membership of the European Quality Assurance Register (EQAR). That’s important for providers with international partnerships and recruitment ambitions, and for students given that ESG recognition underpins trust, mobility, and recognition across the European Higher Education Area.

    But OfS’ conditions don’t require co-design of the quality assurance framework itself, nor proof that student views shape outcomes. Its proposals expand student assessor roles in the TEF, but don’t guarantee systematic involvement in all external reviews or transparency of outcomes – both central to ESG. And as the ongoing QA-FIT project and ESU have argued, the next revision of the ESG is likely to push student engagement further, emphasising co-creation, culture, and demonstrable impact.

    If it does apply for EQAR recognition, our European peers will surely notice what English students already know – the gap between OfS’ rhetoric on student partnership and the reality of its actual understanding and actions is becoming impossible to ignore.

    When I told those student officers back on campus that their university would be spending £25,000 of their student fee income every time it has to take part in the exercise, their anger was palpable. When I added that according to the new OfS chair, Silver and Gold might enable higher fees, while Bronze or “Requires Improvement” might cap or further reduce their student numbers, they didn’t actually believe me.

    The student interest? Hardly.


  • Understanding Value of Learning Fuels ChatGPT’s Study Mode

    Understanding Value of Learning Fuels ChatGPT’s Study Mode


    When classes resume this fall, college students will have access to yet another generative artificial intelligence tool marketed as a learning enhancement.

    But instead of generating immediate answers, OpenAI’s new Study Mode for ChatGPT acts more like a tutor, firing off questions, hints, self-reflection prompts and quizzes that are tailored to the user and informed by their past chat history. While traditional large language models have raised academic integrity concerns, Study Mode is intended to provide a more active learning experience. It mimics the type of Socratic dialogue students may expect to encounter in a lecture hall and challenges them to draw on information they already know to form their own nuanced analyses of complex questions.

    For example, when Inside Higher Ed asked the traditional version of ChatGPT which factors caused the United States Civil War, it immediately responded that the war had “several major causes, most of which revolved around slavery, states’ rights, and economic differences,” and gave more details about each before producing a five-paragraph essay on the topic. Asking Study Mode the same question, however, prompted it to give a brief overview before asking this question: “Would you say the war was fought because of slavery, or about something else like states’ rights or economics? There’s been debate over this, so I’d love to hear your view first. Then I’ll show you how historians analyze it today.”
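
    Study Mode itself is built into ChatGPT rather than something developers switch on, but the questioning pattern in the example above can be loosely approximated with an ordinary chat API call and a tutoring system prompt. The sketch below is a minimal illustration under that assumption; the model name, prompt wording and helper function are illustrative choices, not OpenAI’s implementation of Study Mode.

    ```python
    # A rough approximation of a Socratic "ask before you answer" tutor using
    # the standard OpenAI chat API. This is not how Study Mode is implemented;
    # it is a sketch, and the model name and prompt are illustrative choices.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TUTOR_PROMPT = (
        "You are a patient tutor. Never give a complete answer straight away. "
        "Ask one guiding question at a time, offer hints, and check the "
        "student's reasoning before revealing anything further."
    )

    def tutor_reply(history: list[dict], student_message: str) -> str:
        """Send the running conversation plus the new student message and
        return a tutor-style response rather than a finished answer."""
        messages = (
            [{"role": "system", "content": TUTOR_PROMPT}]
            + history
            + [{"role": "user", "content": student_message}]
        )
        response = client.chat.completions.create(model="gpt-4o", messages=messages)
        return response.choices[0].message.content

    # Asked about the causes of the Civil War, the reply should come back as a
    # question about the student's own view instead of a five-paragraph essay.
    print(tutor_reply([], "What factors caused the United States Civil War?"))
    ```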

    Study Mode is similar to the Learning Mode that Anthropic launched for its chat bot Claude for Education back in April and the Guided Learning version of Gemini that Google unveiled Wednesday. OpenAI officials say they hope Study Mode will “support deeper learning” among college students.

    While teaching and learning experts don’t believe such tools can replace the value faculty relationships and expertise offer students, Study Mode’s release highlights generative AI’s evolving possibilities—and limitations—as a teaching and learning aid. For students who choose to use it instead of asking a traditional LLM for answers, Study Mode offers an on-demand alternative to a human tutor, unbound by scheduling conflicts, payment or feedback delays.

    But in an economy where generative AI’s ability to gather and regurgitate information is threatening the future of entry-level office jobs, students will need to understand what they’re trying to get out of their college coursework if they want Study Mode to deliver on its promise to foster deeper learning.

    “Over all, it does represent a significant departure from how people are normally encouraged to use generative AI, given that the value proposition of this technology is largely couched in the concept of efficiency, speed and offloading things that are not as important,” said Trey Conatser, assistant provost for teaching and learning at the University of Kentucky and director of UK’s Center for the Enhancement of Teaching and Learning. “But Study Mode actually asks us to stick with it for it to work, take some time to sit with the friction of learning and be patient with the practice and feedback.”

    Critical Thinking in Age of AI

    The release of Study Mode comes nearly three years after the debut of ChatGPT—and the many other large language models that have followed—set off ongoing concerns among educators that college students’ increased reliance on traditional LLMs to write papers or solve math problems may erode their critical thinking skills.

    There’s not yet a consensus on whether generative AI helps or hurts critical thinking, but at least two studies published in the last year found that adults, including college students, who use LLMs to research and learn about a subject exhibited weaker reasoning and analytical skills. And those may be the exact types of skills that the roughly two-thirds of college students who are already using generative AI tools like ChatGPT will need if Study Mode is to reach its full potential.

    While ChatGPT’s Study Mode is programmed to query students rather than generate answers, the users themselves also need to be prepared to push the LLM to ask deeper, more nuanced questions. At least, that was Conatser’s experience when he tested out Study Mode over the weekend and asked it to help him study for a hypothetical quiz on Romantic poets from 1780 to 1820.

    First, the LLM asked him more about his education level and the focus of the quiz. He told it that he’s a college junior whose quiz will focus on historical context and specified that he wanted help studying William Wordsworth’s poem “The Prelude.”

    The following conversation, which Conatser shared with Inside Higher Ed, ensued:

    Study Mode: Wordsworth described “The Prelude” as “a poem on the growth of my own mind.” What exact subtitle did he eventually attach to the poem that captures this idea?

    (Give it a try, and we’ll check your answer before moving on.)

    Conatser: That’s not so much what I will need to answer. It will be open book, but I will be asked to provide some interpretations to historical context and literary traditions.

    After some more dialogue asking Study Mode to focus on historical context, it generated the following question:

    Study Mode: “Bliss was it in that dawn to be alive …” Based on that word, “bliss,” what general emotion do you think he felt when he first encountered the early French Revolution—excitement, fear or apathy?

    Conatser: That seems like an obvious question

    Study Mode: Understood—let’s turn up the difficulty. In “The Prelude” (Book X), Wordsworth writes of “how in my heart a dreary weight / Fell, and I turned away …”—a moment that marks his break with the revolution he once celebrated.

    Question (one step): Which single, headline-making event in France during 1793 does this passage point to, and why did that event feel like a betrayal of the revolution’s original ideals for many sympathetic British observers?

    Although it took some prodding to get the LLM to ask the type of questions Conatser wanted to see, “I’m still in the driver’s seat with Study Mode,” he said. “That’s where students are going to need to be when they’re using it.”

    Knowing when to push back against Study Mode when it’s not asking the types of questions that will help students master material will be key to helping them get the most out of its ability to foster critical thinking, Conatser said.

    “If someone is going to be totally passive with Study Mode, it may not drive them to where they need to go and they’ll still need to be thinking about how they’re supposed to be studying, what kind of assessment they’re leading up to and what kinds of skills will be more important for that assessment,” he said. “If you don’t know what you don’t know, then you won’t be very effective at using these tools even if they have the types of guardrails that Study Mode has.”

    And if a student isn’t motivated to use Study Mode to its full potential, they can prompt it to generate complete answers, just like a traditional LLM, according to Christian Carmody, a senior researcher at the Research Institute for Learning and Development, who also tested it out this week.

    “For a current events assignment, I prompted Study Mode right away and told it, ‘Before we engage, I do not want your help with this or [to] encourage me to think through this. I do this on my own another time. I really just want the answers,’” Carmody recalled. “It did exactly that.”

    The ability for students to easily manipulate Study Mode could add more pressure to colleges and universities that are facing growing skepticism from students about the value of degrees in the age of AI.

    “Students should be able to think about why learning is valuable to them and why they should be able to engage with material in a way that’s challenging and force deep thinking,” he said. “Until a student has that mindset, I’m not confident that they are going to use this study and learning tool in the way it’s intended to be used.”
