  • Efforts to build belonging may get the problem the wrong way around

    Back in January 2024, John Blake, the now-departing Office for Students’ Director for Fair Access and Participation, was talking about the future of access and participation plans.

    Alongside announcing additional groups of students who might be at risk – service children, young carers, prisoners, commuter students, parents, and Jewish students – he noted that “sense of belonging” had appeared in lots of evidence reviews as relevant to many of the risks.

    As he put it, “I’d urge providers to think hard about practical, enduringly impactful work they might do around that idea as part of new APPs.”

    Now that all the approved APPs are in, I’ve had a look at what providers are actually proposing.

    I’ve reviewed approved access and participation plans from across the sector in England, extracting every mention of belonging as a strategic priority, every identification of belonging deficits as a risk, and every intervention designed to address them.

    The result is a picture of how the sector understands and responds to belonging challenges. The pattern I’ve found is so consistent across provider types, mission groups, and geographical locations that it ought to amount to a sector-wide consensus about how to “do” belonging.

    The problem is that this consensus appears to be fundamentally at odds with what research tells us about how belonging actually works.

    The deficit model at scale

    Nearly every university identifies that specific disadvantaged groups – Black students, mature students, care-experienced students, disabled students, commuter students, students from IMD Quintile 1 – report lower belonging scores than their peers.

    They then design targeted interventions to address this deficit – peer mentoring schemes for Black students, mature student networks and “mingles”, care-experienced student buddy schemes, disability-specific student groups, commuter-specific transition support.

    The interventions are pretty homogeneous. Birkbeck is running “sustained programmes of Black Unity Events” to “provide a space for Black students to authentically be themselves, form connections and friendships”. Leeds Arts has created “My/Your/Our Space” – a “safer space and community relevant to background” specifically for students of minoritised ethnicities. Northampton has developed a “Black Excellence Programme” designed “to empower Black undergraduate students early on in their transition to level 4 courses with the confidence, sense of belonging and mattering to become resilient leaders and role models”.

    Greenwich has implemented the “Living Black at University Project to support BAME students develop a sense of belonging and community outside of the classroom”. Liverpool John Moores is “developing a Black students peer network via JMSU, focusing on creating a black student community”.

    It’s not just ethnicity. For mature students, East Anglia will “continue specific co-created sense of belonging opportunities for groups of students to meet socially” through a mature student network. Leeds is expanding a “middle ground network pilot” – “co-creating spaces (virtual, physical) for mature and ‘younger mature’ students to help develop a greater sense of belonging”. Bristol is implementing “enhanced mature student community building through mingles, student advocate-led events, and an extended mature student welcome and transition programme”.

    The pattern is almost identical across every characteristic. Care-experienced students get targeted belonging interventions at York (“Achieve HE program aims for increased sense of belonging socially and academically”), Durham (“dedicated mature learners coordinator” aims for “increased sense of belonging”), and Portsmouth (specialist support for “enhanced sense of belonging”). Disabled students get belonging-focused societies and groups. Commuter students get special spaces. And so on.

    Nearly every institution frames belonging as something that specific groups lack, and that requires special intervention to remedy. The language is consistent – students from disadvantaged backgrounds “may struggle to feel they fit in”, “can lack a sense of belonging at university”, “feel disconnected from their academics/tutors and/or fellow students”, and “feel isolated or unsupported from the moment they arrived at University”.

    The Wisconsin problem

    I’ve talked about this before here, but about a decade ago, there was a problem at the University of Wisconsin-Madison. Across a collection of STEM courses, there was a significant achievement gap between marginalised groups (all religious minorities and non-White students) and privileged students.

    Psychology professor Markus Brauer had an idea based on his previous research on social norms messaging – communicating to people that most of their peers hold certain pro-social attitudes or engage in certain pro-social behaviours.

    He started by trying out posters, then showed two groups of students videos. One saw an off-the-shelf explanation of bias and micro-aggressions. The other saw lots of students describing the day-to-day benefits of diversity – a “social norms” video revealing that 87 per cent of students actively supported diversity and inclusion.

    The latter video had a strong, significant, positive effect on inclusive climate scores for students from marginalised backgrounds. They reported that their peers behaved more inclusively and treated them with more respect.

    More strikingly, by the end of the semester the achievement gap was completely eliminated. Not through remedial support for struggling students, not through special programmes for disadvantaged groups, but through changing what everyone believed about what everyone else valued.

    The Wisconsin intervention didn’t create a “Black Student Success Program”, didn’t offer “enhanced support for marginalised students”, and didn’t build “safe spaces” for specific groups or train “allies” to support disadvantaged students. It told all students the truth about what their peers already valued – and behaviour changed dramatically.

    The research found that while most students genuinely valued diversity, they incorrectly believed their peers didn’t share these values, and the misperception created a false social norm that discouraged inclusive behaviour.

    Students who might naturally reach out across cultural boundaries held back, thinking they’d be the odd ones out. When you correct that misperception – when you say “actually, 87 per cent of your peers actively support diversity” – you transform inclusive behaviour from an exceptional act requiring special training into standard behaviour.

    But most elements of the dominant APP approach do the opposite:

    • Wisconsin said: “Most students already value diversity – here’s proof”. UK universities say: “We need to create spaces where Black students can feel they belong”
    • Wisconsin said: “Inclusive behaviour is normal here”. UK universities say: “We’ll train mature students how to access support networks”
    • Wisconsin said: “Let’s change what everyone thinks everyone else believes”. UK universities say: “Let’s give disadvantaged groups the resources they lack”

    The Wisconsin research explicitly warns against the dominant approach. As the researchers note:

    “…empowering marginalised groups through special initiatives can paradoxically highlight their ‘different’ status, reinforcing the hierarchies we’re trying to dismantle.”

    Power and perception

    To understand why the targeted approach fails, we need to examine how power operates in university settings. Brauer’s research identifies several key dynamics.

    Power shapes perception – those with social power tend to stereotype less powerful groups while seeing their own group as diverse individuals. Power also affects behaviour – powerful individuals act more freely, take bigger risks, and break social rules more often. In seminars, confident students dominate discussions while others remain silent – not because they lack ideas, but because power dynamics constrain their behaviour.

    Most importantly, power creates attribution biases. When powerful people succeed, we attribute it to their personal qualities. When less powerful people fail, we blame their circumstances. This creates self-fulfilling prophecies that reinforce existing hierarchies.

    The dynamics explain why traditional EDI initiatives often fail. Telling powerful groups they’re biased can actually reinforce stereotyping by making them defensive. Meanwhile, “empowering” marginalised groups through special initiatives paradoxically highlights their “different” status, reinforcing the hierarchies we’re trying to dismantle.

    For Brauer, the students don’t lack belonging. The institution lacks inclusive structures that make belonging feel normal. There’s a profound difference between “you need help fitting in because you’re different” and “this is how we all do things here – welcome to the crew.”

    Ticking the boxes

    So why are universities doing this? Partly because OfS asked them to think about belonging, partly because APP spend has to be “on” the disadvantaged groups, and partly because “we’re doing a thing” makes sense in a compliance environment.

    It’s easily documented, measurable by group, defensible to regulators, and demonstrably “doing something”. The Wisconsin approach would be much harder to report in an APP. How do you document “we told everyone that most students already value diversity”? Which “target group” got the “intervention”? What’s the “spend per head”? How do you prove that changing perceived social norms reduced the achievement gap when you didn’t target any specific demographic?

    As such, the APP architecture itself pushes providers toward deficit-model interventions. You can’t write “we’re going to make peer support universal and student-led because that’s just how induction works here”, because that doesn’t read as an access and participation intervention.

    You can’t write “we’re going to survey students and publicise that 78 per cent actively welcome international students”. That doesn’t look like spending money on disadvantaged groups, nor does it map onto the OfS risk register.

    The result is targeted compliance theatre that the evidence suggests will entrench the hierarchies it claims to dismantle.

    To be fair, universities are also responding to a genuine perception that students from disadvantaged backgrounds need additional support to succeed. And they’re not wrong about the support needs – they may be wrong about the delivery mechanism.

    When continuation, completion, and attainment gaps persist for Black students, care-experienced students, and students from deprived areas, the institutional instinct is to create support structures for those specific groups – it feels like the responsible, caring response. But in practice, these initiatives put the characteristic first and the student second. The implicit message is: you need special help because you’re different.

    What would actually work

    What would an alternative approach entail? The research suggests five key departures from current practice.

    First is normalising rather than targeting. Instead of creating programmes that make intervention seem exceptional, universities would need to reveal what’s already normal. The Wisconsin approach costs almost nothing – a video, an email, some posters showing that 87 per cent of students actively support diversity. But it requires actually surveying students to discover (they probably would) that most already hold pro-social attitudes, then making that visible. “We surveyed 2,000 students here – 78 per cent actively welcome international students” changes the perceived norm without targeting anyone.

    Universal design rather than special fixes also matters. This means asking different questions. Not “what enhanced personal tutoring do disadvantaged groups need?” but “what if the default tutorial system worked properly for everyone?” Not “what mature student networks should we create?” but “what if study groups and peer support were structured to include all ages and backgrounds by default?” Not “what transition support do care-experienced students need?” but “what if induction assumed zero prior knowledge and no family support for everyone?”

    This wouldn’t mean removing targeted financial support or specialist services (hardship funds, mental health provision, disability services). Those remain separate. It’s about ensuring the basic architecture of belonging – induction, peer support, community-building – works for everyone by default rather than requiring special programmes for specific groups.

    Student leadership of essential functions matters too. European models show students running welcome week, managing housing cooperatives, delivering careers support, organizing social activities – not as add-ons but as how the institution functions. Belonging becomes structural rather than programmatic.

    The challenge there is that UK universities have spent decades professionalizing student engagement – student experience teams, transition coordinators, wellbeing advisors, residence life programmes, delivered by professionals, for students, rather than by students, for each other. Reversing this requires actually giving functions back to students, with appropriate support structures and (dare we say) compensation for significant roles.

    But most important is working on the advantaged. If you want Black students to feel they belong, the Wisconsin research suggests you work with white students to change what they believe about what their peers value. The achievement gap closed partly because white students changed their behaviour.

    If you want mature students to feel integrated, you create structures where all students work together on meaningful projects, where collaboration across demographics is normal and expected. If you want care-experienced students to feel they matter, you create environments where all students contribute to running their community, where everyone assumes they’ll both need help and provide it to others.

    Little of this appears in approved APPs, which at best read as well-meaning, and at worst like victim blaming. Whether alternatives could appear in a future APP iteration – whether the architecture of the APP process would even recognise these as access and participation interventions – is an open question.

    What happens now

    The challenge both for OfS and for universities is significant. Every APP currently includes detailed commitments to targeted belonging interventions, complete with evaluation frameworks and expected outcomes. Universities have staff, allocated budgets, designed programmes, and set objectives based on the deficit model approach. Rowing back isn’t straightforward.

    But the evidence is increasingly clear that the approach, however well-intentioned, is unlikely to work – and may indeed backfire. More fundamentally, the sector needs to grapple with some uncomfortable questions. If most UK students already hold pro-social and pro-diversity attitudes (and research suggests they probably do), why don’t they act on them? What structural barriers prevent students from forming friendships and study groups across demographic boundaries?

    John Blake asked for “practical, enduringly impactful work” around belonging. What universities have delivered is well-intentioned, carefully designed, and probably counterproductive.

    The good news is that what actually works – changing social norms, creating universal structures, enabling student leadership – is arguably easier and cheaper than what the sector is intending. The bad news is that it requires the sector to admit it’s been thinking about the problem the wrong way around.


  • Algorithms aren’t the problem. It’s the classification system they support

    The Office for Students (OfS) has published its annual analysis of sector-level degree classifications over time, and alongside it a report on Bachelors’ degree classification algorithms.

    The former is of the style (and with the faults) we’ve seen before. The latter is the controversial bit, both in the extent to which parts of it represent a “new” set of regulatory requirements, and in the “new” set of rules it lays down over what universities can and can’t do when calculating degree results.

    Elsewhere on the site my colleague David Kernohan tackles the regulation issue – the upshots of the “guidance” on the algorithms, including what it will expect universities to do both to algorithms in use now, and if a provider ever decides to revise them.

    Here I’m looking in detail at its judgements over two practices. Universities are, to all intents and purposes, being banned from any system which discounts credits with the lowest marks – a practice which the regulator says makes it difficult to demonstrate that awards reflect achievement.

    It’s also ruling out “best of” algorithm approaches – any universities that determine degree class by running multiple algorithms and selecting the one that gives the highest result will also have to cease doing so. Anyone still using these approaches by 31 July 2026 has to report itself to OfS.

    Powers and process do matter, as do questions as to whether this is new regulation, or merely a practical interpretation of existing rules. But here I’m concerned with the principle. Has OfS got a point? Do systems such as those described above amount to misleading people who look at degree results over what a student has achieved?

    More, not less

    A few months ago now on Radio 4’s More or Less, I was asked how Covid had impacted university students’ attainment. On a show driven by data, I was wary about admitting that as a whole, I think it would be fair to say that UK HE isn’t really sure.

    When in-person everything was cancelled back in 2020, universities scrambled to implement “no detriment” policies that promised students wouldn’t be disadvantaged by the disruption.

    Those policies took various forms – some guaranteed that classifications couldn’t fall below students’ pre-pandemic trajectory, others allowed students to select their best marks, and some excluded affected modules entirely.

    By 2021, more than a third of graduates were receiving first-class honours, compared to around 16 per cent a decade earlier – with ministers and OfS on the march over the risk of “baking in” the grade inflation.

    I found that pressure troubling at the time. It seemed to me that for a variety of reasons, providers may have, as a result of the pandemic, been confronting a range of faults with degree algorithms – for the students, courses and providers that we have now, it was the old algorithms that were the problem.

    But the other interesting thing for me was what those “safety net” policies revealed about the astonishing diversity of practice across the sector when it comes to working out the degree classification.

    For all of the comparison work done – including, in England, official metrics on the Access and Participation Dashboard over disparities in “good honours” awarding – I was wary about admitting to Radio 4’s listeners that it’s not just differences in teaching, assessment and curriculum that can drive someone getting a First here and a 2:2 up the road.

    When in-person teaching returned in 2022 and 2023, the question became what “returning to normal” actually meant. Many – under regulatory pressure not to “bake in” grade inflation – removed explicit no-detriment policies, and the proportion of firsts and upper seconds did ease slightly.

    But in many providers, many of the flexibilities introduced during Covid – around best-mark selection, module exclusions and borderline consideration – had made explicit and legitimate what was already implicit in many institutional frameworks. And many were kept.

    Now, in England, OfS is to all intents and purposes banning a couple of the key approaches that were deployed during Covid. For a sector that prizes its autonomy above almost everything else, that’ll trigger alarm.

    But a wider look at how universities actually calculate degree classifications reveals something – the current system embodies fundamentally different philosophies about what a degree represents, philosophies that produce systematically different outcomes for identical student performance, and that should not be written off lightly.

    What we found

    Building on David Allen’s exercise seven years ago, a couple of weeks ago I examined the publicly available degree classification regulations for more than 150 UK universities, trawling through academic handbooks, quality assurance documents and regulatory frameworks.

    The shock for the Radio 4 listener on the Clapham Omnibus would be that there is no standardised national system with minor variations – instead, a patchwork of fundamentally different approaches to calculating the same qualification.

    Almost every university claims to use the same framework for UG quals – the Quality Assurance Agency benchmarks, the Framework for Higher Education Qualifications and standard grade boundaries of 70 for a first, 60 for a 2:1, 50 for a 2:2 and 40 for a third. But underneath what looks like consistency there’s extraordinary diversity in how marks are then combined into final classifications.

    The variations cluster around a major divide. Some universities – predominantly but not exclusively in the Russell Group – operate on the principle that a degree classification should reflect the totality of your assessed work at higher levels. Every module (at least at Level 5 and 6) counts, every mark matters, and your classification is the weighted average of everything you did.

    Other universities – predominantly post-1992 institutions but with significant exceptions – take a different view. They appear to argue that a degree classification should represent your actual capability, demonstrated through your best work.

    Students encounter setbacks, personal difficulties and topics that don’t suit their strengths. Assessment should be about demonstrating competence, not punishing every misstep along a three-year journey.

    Neither philosophy is obviously wrong. The first prioritises consistency and comprehensiveness. The second prioritises fairness and recognition that learning isn’t linear. But they produce systematically different outcomes, and the current system does allow both to operate under the guise of a unified national framework.

    Five features that create flexibility

    Five structural features appear repeatedly across university algorithms, each pushing outcomes in one direction.

    1. Best-credit selection

    This first one has become widespread, particularly outside the Russell Group. Rather than using all module marks, many universities allow students to drop their worst performances.

    One uses the best 105 credits out of 120 at each of Levels 5 and 6. Another discards the lowest 20 credits automatically. A third takes only the best 90 credits at each level. Several others use the best 100 credits at each stage.

    The rationale is obvious – why should one difficult module or one difficult semester define an entire degree?

    But the consequence is equally obvious. A student who scores 75-75-75-75-55-55 across six modules averages 68.3 per cent. At universities where everything counts, that’s a 2:1. At universities using best-credit selection that drops the two 55s, it averages 75 – a clear first.
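    The arithmetic is easy to check. Here is a minimal sketch in Python of the worked example above, assuming six equally weighted 20-credit modules (the module sizes are an illustrative assumption, not any particular provider’s rules):

```python
def classify(avg):
    """Map a weighted average to an honours class at standard boundaries."""
    if avg >= 70: return "First"
    if avg >= 60: return "2:1"
    if avg >= 50: return "2:2"
    if avg >= 40: return "Third"
    return "Fail"

# The worked example from the text: six equally weighted modules
marks = [75, 75, 75, 75, 55, 55]

# Everything counts: plain mean of all credits
all_credits = sum(marks) / len(marks)        # 68.3

# Best-credit selection: discard the lowest 40 credits (two modules)
best = sorted(marks, reverse=True)[:4]
best_credits = sum(best) / len(best)         # 75.0

print(classify(all_credits))   # 2:1
print(classify(best_credits))  # First
```

    Same marks, different algorithm, a full class of difference.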

    Best-credit selection is the majority position among post-92s, but virtually absent at Russell Group universities. OfS is now pretty much banning this practice.

    The case against rests on B4.2(c) (academic regulations must be “designed to ensure” awards are credible) and B4.4(e) (credible means awards “reflect students’ knowledge and skills”). Discounting credits with lowest marks “excludes part of a student’s assessed achievement” and so:

    …may result in a student receiving a class of degree that overlooks material evidence of their performance against the full learning outcomes for the course.

    2. Multiple calculation routes

    These take that principle further. Several universities calculate your degree multiple ways and award whichever result is better. One runs two complete calculations – using only your best 100 credits at Level 6, or taking your best 100 at both levels with 20:80 weighting. You get whichever is higher.

    Another offers three complete routes – unweighted mean, weighted mean and a profile-based method. Students receive the highest classification any method produces.

    For those holding onto their “standards”, this sort of thing is mathematically guaranteed to inflate outcomes. You’re measuring the best possible interpretation of what students achieved, not what they achieved every time. As a result, comparison across institutions becomes meaningless. Again, this is now pretty much being banned.

    This time, the case against is that:

    …the classification awarded should not simply be the most favourable result, but the result that most accurately reflects the student’s level of achievement against the learning outcomes.
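    To see why a “best of” rule can only push results upward, here is a hedged sketch of the two routes described above. The 20:80 weighting and best-100-credit selections are the ones quoted in the text; the marks and equal 20-credit module sizes are invented for illustration:

```python
def best_n(marks, n):
    """Mean of the best n modules (modules assumed equal at 20 credits)."""
    top = sorted(marks, reverse=True)[:n]
    return sum(top) / len(top)

def route_a(l6):
    """Route 1: best 100 credits (5 modules) at Level 6 only."""
    return best_n(l6, 5)

def route_b(l5, l6):
    """Route 2: best 100 credits at each level, 20:80 weighting."""
    return 0.2 * best_n(l5, 5) + 0.8 * best_n(l6, 5)

l5 = [55, 52, 58, 60, 50, 48]   # hypothetical Level 5 module marks
l6 = [74, 72, 70, 68, 66, 50]   # hypothetical Level 6 module marks

a = route_a(l6)       # 70.0 -> a First on route 1
b = route_b(l5, l6)   # 67.0 -> a 2:1 on route 2
award = max(a, b)     # "best of" always takes the higher: a First
```

    Because max() can never return less than either route alone, the rule is, as described above, mathematically guaranteed to inflate outcomes.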

    3. Borderline uplift rules

    What happens on the cusps? Borderline uplift rules create all sorts of discretion around the theoretical boundaries.

    One university automatically uplifts students to the higher class if two-thirds of their final-stage credits fall within that band, even if their overall average sits below the threshold. Another operates a 0.5 percentage point automatic uplift zone. Several maintain 2.0 percentage point consideration zones where students can be promoted if profile criteria are met.

    If 10 per cent of students cluster around borderlines and half are uplifted, that’s a five per cent boost to top grades at each boundary – the cumulative effect is substantial.

    One small and specialist provider plays the counterfactual – when it gained degree-awarding powers, it explicitly removed all discretionary borderline uplift. The boundaries are fixed – and it argues this is more honest than maintaining discretion that inevitably becomes inconsistent.

    OfS could argue borderline uplift breaches B4.2(b)’s requirement that assessments be “reliable” – defined as requiring “consistency as between students.”

    When two students with 69.4 per cent overall averages receive different classifications (one uplifted to a First, one remaining a 2:1) based on mark distribution patterns or examination board discretion, the system produces inconsistent outcomes for identical demonstrated performance.

    But OfS avoids this argument, likely because it would directly challenge decades of established discretion on borderlines – a core feature of the existing system. Eliminating all discretion would conflict with professional academic judgment practices that the sector considers fundamental, and OfS has chosen not to pick that fight.
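    The inconsistency is easy to make concrete by encoding one of the rules described above – automatic uplift when two-thirds of final-stage credits sit in the higher band. This is a sketch only, with equal module sizes assumed and the marks invented:

```python
def classify_with_uplift(average, final_stage_marks, boundary=70):
    """First/2:1 decision under a two-thirds borderline uplift rule.

    Uplifts to the higher class when two-thirds of final-stage modules
    (assumed equal-sized) fall in the higher band, even if the overall
    average sits below the threshold.
    """
    in_band = sum(1 for m in final_stage_marks if m >= boundary)
    uplifted = in_band / len(final_stage_marks) >= 2 / 3
    return "First" if average >= boundary or uplifted else "2:1"

# Two students with an identical 69.4 average,
# but different final-stage mark profiles
student_a = classify_with_uplift(69.4, [74, 72, 71, 70, 60, 58])  # "First"
student_b = classify_with_uplift(69.4, [69, 69, 70, 70, 69, 69])  # "2:1"
```

    Identical averages, different outcomes, depending entirely on how the marks happen to be distributed.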

    4. Exit acceleration

    Heavy final-year weighting amplifies improvement while minimising early difficulties. Where deployed, the near-universal pattern is now 25 to 30 per cent for Level 5 and 70 to 75 per cent for Level 6. Some institutions weight even more heavily, with year three counting for 60 per cent of the final mark.

    A student who averages 55 in year two and 72 in year three gets 67.8 overall with a 25:75 weighting – a 2:1. A student who averages 72 in year two and 55 in year three gets 59.3 – just short of a 2:1.

    The magnitude of change is identical – it’s just that the direction differs. The system structurally rewards late bloomers and penalises any early starters who plateau.
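    The asymmetry is just arithmetic, and can be checked directly – here using a 25:75 Level 5:Level 6 split from the range quoted above (the specific split is chosen for illustration):

```python
def exit_weighted(year_two, year_three, w2=0.25, w3=0.75):
    """Final average under exit-weighted classification (25:75 assumed)."""
    return w2 * year_two + w3 * year_three

late_bloomer  = exit_weighted(55, 72)   # 67.75 -> a comfortable 2:1
early_starter = exit_weighted(72, 55)   # 59.25 -> short of a 2:1
```

    Same two yearly averages, same magnitude of change; only the order differs, and the classification flips.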

    OfS could argue that 75 per cent final-year weighting breaches B4.2(a)’s requirement for “appropriately comprehensive” assessment. B4 Guidance 335M warns that assessment “focusing only on material taught at the end of a long course… is unlikely to provide a valid assessment of that course,” and heavy (though not exclusive) final-year emphasis arguably extends this principle – if the course’s subject matter is taught across three years, does minimizing assessment of two-thirds of that teaching constitute comprehensive evaluation?

    But OfS doesn’t make this argument either, likely because year weighting is explicit in published regulations, often driven by PSRB requirements, and represents settled institutional choices rather than recent innovations. Challenging it would mean questioning established pedagogical frameworks rather than targeting post-hoc changes that might mask grade inflation.

    5. First-year exclusion

    Finally, with a handful of institutional and PSRB exceptions, first-year marks not counting is now pretty much universal, removing what used to be the bottom tail of performance distributions.

    While this is now so standard it seems natural, it represents a significant structural change from 20 to 30 years ago. You can score 40s across the board in first year and still graduate with a first if you score 70-plus in years two and three.

    Combine it with other features, and the interaction effects compound. At universities using best 105 credits at each of Levels 5 and 6 with 30:70 weighting, only 210 of 360 total credits – 58 per cent – actually contribute to your classification. And so on.
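    The credit accounting for that regime – first year excluded, best 105 of 120 credits at each of Levels 5 and 6, 30:70 level weighting – can be sketched directly:

```python
total_credits = 3 * 120     # three years of 120 credits each
counted = 105 + 105         # best 105 at Level 5 plus best 105 at Level 6
share = counted / total_credits   # ~0.583: 58% of credits count

# Effective weight of a single credit on the final average
per_l5_credit = 0.30 / 105  # ~0.29% each at Level 5
per_l6_credit = 0.70 / 105  # ~0.67% each at Level 6
per_l4_credit = 0.0         # first year: nothing
```

    A Level 6 credit counts more than twice as much as a Level 5 one, and a first-year credit not at all.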

    OfS could argue first-year exclusion breaches comprehensiveness requirements – when combined with best-credit selection, only 210 of 360 total credits (58 per cent) might count toward classification. But the practice is now so universal, with only a handful of institutional and PSRB exceptions, that OfS treats it as neutral accepted practice rather than a compliance concern.

    Targeting something this deeply embedded across the sector would face overwhelming institutional autonomy defenses and would effectively require the sector to reinstate a practice it collectively abandoned over the past two decades.

    OfS’ strategy is to focus regulatory pressure on recent adoptions of “inherently inflationary” practices rather than challenging longstanding sector-wide norms.

    Institution type

    Russell Group universities generally operate on the totality-of-work philosophy. Research-intensives typically employ single calculation methods, count all credits and maintain narrow borderline zones.

    But there are exceptions. One I’ve seen has automatic borderline uplift that’s more generous than many post-92s. Another’s 2.0 percentage point borderline zone adds substantial flexibility. If anything, the pattern isn’t uniformity of rigour – it’s uniformity of philosophy.

    One London university has a marks-counting scheme rather than a weighted average – what some would say is the most “rigorous” system in England. And two others – you can guess who – don’t fit this analysis at all, with subject-specific systems and no university-wide algorithms.

    Post-1992s systematically deploy multiple flexibility features. Best-credit selection appears at roughly 70 per cent of post-92s. Multiple calculation routes appear at around 40 per cent of post-92s versus virtually zero per cent at research-intensive institutions. Several post-92s have introduced new, more flexible classification algorithms in the past five years, while Russell Group frameworks have been substantially stable for a decade or more.

    This difference reflects real pressures. Post-92s face acute scrutiny on student outcomes from league tables, OfS monitoring and recruitment competition, and disproportionately serve students from disadvantaged backgrounds with lower prior attainment.

    From one perspective, flexibility is a cynical response to metrics pressure. From another, it’s recognition that their students face different challenges. Both perspectives contain truth.

    Meanwhile, Scottish universities present a different model entirely, using GPA-based calculations across SCQF Levels 9 and 10 within four-year degree structures.

    The Scottish system is more internally standardised than the English system, but the two are fundamentally incompatible. If OfS’s push for standardisation were ever extended beyond England, Scottish universities would surely refuse, citing devolved education powers.

    London offers maximum algorithmic diversity within minimum geographic distance. Major London universities use radically different calculation systems despite competing for similar students. A student with identical marks might receive a 2:1 at one, a first at another and a first with a higher average at a third, purely because of algorithmic differences.
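    To make that concrete, here's a toy comparison in Python. Both algorithms are invented for illustration – neither is any real London university's scheme – but they mirror the “totality of work” and “best-credit” philosophies, and they classify identical marks differently:

```python
# A hypothetical student's module marks (all 20-credit modules) -
# illustrative numbers, not any real university's regulations.
year_two = [60, 58, 62, 56, 54, 60]
year_three = [74, 76, 70, 55, 68, 66]

def classify(avg):
    if avg >= 70: return "First"
    if avg >= 60: return "2:1"
    if avg >= 50: return "2:2"
    return "Third"

# Algorithm A: "totality of work" - every credit from years two
# and three counts equally.
algo_a = sum(year_two + year_three) / 12

# Algorithm B: best-credit selection - only the best 100 credits
# (five of six modules) of the final year count.
algo_b = sum(sorted(year_three, reverse=True)[:5]) / 5

print(classify(algo_a), round(algo_a, 2))   # -> 2:1 63.25
print(classify(algo_b), round(algo_b, 2))   # -> First 70.8
```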

    What the algorithm can’t tell you

    The “five features” capture most of the systematic variation between institutional algorithms. But they’re not the whole story.

    First, they measure the mechanics of aggregation, not the standards of marking. A 65 per cent essay at one university may represent genuinely different work from a 65 per cent at another. External examining is meant to moderate this, but the system depends heavily on trust and professional judgment. Algorithmic variation compounds whatever underlying marking variation exists – but marking standards themselves remain largely opaque.

    Second, several important rules fall outside the five-feature framework but still create significant variation. Compensation and condonement rules – how universities handle failed modules – differ substantially. Some allow up to 30 credits of condoned failure while still classifying for honours. Others exclude students from honours classification with any substantial failure, regardless of their other marks.

    Compulsory module rules also cut across the best-credit philosophy. Many universities mandate that dissertations or major projects must count toward classification even if they’re not among a student’s best marks. Others allow them to be dropped. A student who performs poorly on their dissertation but excellently elsewhere will face radically different outcomes depending on these rules.

    In a world where huge numbers of students have radically less module choice than they did just a few years ago as a result of cuts, they have reason to feel doubly aggrieved if modules they never wanted to take in the first place now count toward their classification when previously they would not have done.

    Several universities use explicit credit-volume requirements at each classification threshold. A student might need not just a 60 per cent average for a 2:1, but also at least 180 credits at 60 per cent or above, including specific volumes from the final year. This builds dual criteria into the system – you need both the average and the profile. It’s philosophically distinct from borderline uplift, which operates after the primary calculation.
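    A hypothetical dual-criteria rule of that kind might look like this in Python – the thresholds and student profiles are invented, but they show how two students with the same average can land on different sides of the line:

```python
# Hypothetical rule: a 2:1 requires BOTH a weighted average of 60+
# AND at least 180 of 240 counted credits marked at 60 or above.
def meets_two_one(modules):
    """modules: list of (credits, mark) pairs."""
    total = sum(credits for credits, _ in modules)
    average = sum(credits * mark for credits, mark in modules) / total
    credits_at_60 = sum(credits for credits, mark in modules if mark >= 60)
    return average >= 60 and credits_at_60 >= 180

# Two invented profiles with an identical 61 average
spiky = [(120, 75), (120, 47)]   # only 120 credits at 60+
flat = [(240, 61)]               # all 240 credits at 60+

print(meets_two_one(spiky), meets_two_one(flat))  # -> False True
```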

    And finally, treatment of reassessed work varies. Nearly all universities cap resit marks at the pass threshold, but some exclude capped marks from “best credit” calculations while others include them. For students who fail and recover, this determines whether they can still achieve high classifications or are effectively capped at lower bands regardless of their other performance.

    The point isn’t so much that I (or OfS) have missed the “real” drivers of variation – the five features genuinely are the major structural mechanisms. But the system’s complexity runs deeper than any five-point list can capture. When we layer compensation rules onto best-credit selection, compulsory modules onto multiple calculation routes, and volume requirements onto borderline uplift, the number of possible institutional configurations runs into the thousands.

    The transparency problem

    Every day’s a school day at Wonkhe, but what has been striking for me is quite how difficult the information has been to access and compare. Some institutions publish comprehensive regulations as dense PDF documents. Others use modular web-based regulations across multiple pages. Some bury details in programme specifications. Several have no easily locatable public explanation at all.

    UUK’s position on this, I’d suggest, is something of a stretch:

    University policies are now much more transparent to students. Universities are explaining how they calculate the classification of awards, what the different degree classifications mean and how external examiners ensure consistency between institutions.

    Publication cycles vary unpredictably, cohort applicability is often ambiguous, and cross-referencing between regulations, programme specifications and external requirements adds layers upon layers of complexity. The result is that meaningful comparison is effectively impossible for anyone outside the quality assurance sector.

    This opacity matters because it masks that non-comparability problem. When an employer sees “2:1, BA in History” on a CV, they have no way of knowing whether this candidate’s university used all marks or selected the best 100 credits, whether multiple calculation routes were available or how heavily final-year work was weighted. The classification looks identical regardless. That makes it more, not less, likely that they’ll just go on prejudices and league tables – regardless of the TEF medal.

    We can estimate the impact conservatively. Year one exclusion removes perhaps 10 to 15 per cent of the performance distribution. Best-credit selection removes another five to 10 per cent. Heavy final-year weighting amplifies improvement trajectories. Multiple calculation routes guarantee some students shift up a boundary. Borderline rules uplift perhaps three to five per cent of the cohort at each threshold.

    Stack these together and you could shift perhaps 15 to 25 per cent of students up one classification band compared to a system that counted everything equally with single-method calculation and no borderline flexibility. Degree classifications reveal as much about institutional algorithm choices as about student learning or teaching quality.
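    That compounding effect can be illustrated with a toy simulation. Every assumption here is invented – the mark distribution, the twelve-module structure, the best-220-of-240-credits rule and the 68.5 borderline zone are for illustration only, not drawn from any real regulations – but it shows how individually modest flexibilities stack:

```python
import random

random.seed(0)  # deterministic for reproducibility

def band(avg):
    # Classification bands: 3 = First, 2 = 2:1, 1 = 2:2, 0 = Third
    return 3 if avg >= 70 else 2 if avg >= 60 else 1 if avg >= 50 else 0

N = 10_000
shifted = 0
for _ in range(N):
    # Twelve 20-credit modules across years two and three, marks drawn
    # from an arbitrary illustrative distribution
    marks = [min(95, max(20, random.gauss(60, 10))) for _ in range(12)]

    strict = sum(marks) / 12  # every credit counts equally
    # Best-credit selection: best 220 of 240 credits (11 of 12 modules)
    flexible = sum(sorted(marks, reverse=True)[:11]) / 11
    if 68.5 <= flexible < 70:  # borderline uplift to a First
        flexible = 70

    if band(flexible) > band(strict):
        shifted += 1

print(f"{shifted / N:.0%} of simulated students gain a band from flexibility alone")
```

    The exact percentage depends entirely on the invented distribution; the point is qualitative – each feature alone moves few students, but together they move many.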

    Yes, but

    When universities defend these features, the justifications are individually compelling. Best-credit selection rewards students’ strongest work rather than penalising every difficult moment. Multiple routes remove arbitrary disadvantage. Borderline uplift reflects that the difference between 69.4 and 69.6 per cent is statistically meaningless. Final-year emphasis recognises that learning develops over time. First-year exclusion creates space for genuine learning without constant pressure.

    None of these arguments is obviously wrong. Each reflects defensible beliefs about what education is for. The problem is that they’re not universal beliefs, and the current system allows multiple philosophies to coexist under a facade of equivalence.

    Post-92s add an equity dimension – their flexibility helps students from disadvantaged backgrounds who face greater obstacles. If standardisation forces them to adopt strict algorithms, degree outcomes will decline at institutions serving the most disadvantaged students. But did students really learn less, or attain to a “lower” standard?

    The counterargument is that if the algorithm itself makes classifications structurally easier to achieve, you haven’t promoted equity – you’ve devalued the qualification. And without the sort of smart, skills- and competencies-based transcripts that most of our pass/fail cousins across Europe have adopted, UK students are left between a rock and a hard place – a trade-off most of them aren’t even conscious of making.

    The other thing that strikes me is that the arguments I made in December 2020 for “baking in” grade inflation haven’t gone away just because the pandemic has. If anything, the case for flexibility has strengthened as the cost of living crisis, inadequate maintenance support and deteriorating student mental health create circumstances that affect performance through no fault of students’ own.

    Students are working longer hours in paid employment to afford rent and food, living in unsuitable accommodation, caring for family members, and managing mental health conditions at record levels. The universities that retained pandemic-era flexibilities – best-credit selection, generous borderline rules, multiple calculation routes – aren’t being cynical about grade inflation. They’re recognising that their students disproportionately face these obstacles, and that a “totality-of-work” philosophy systematically penalises students for circumstances beyond their control rather than assessing what they’re actually capable of achieving.

    The philosophical question remains – should a degree classification reflect every difficult moment across three years, or should it represent genuine capability demonstrated when circumstances allow? Universities serving disadvantaged students have answered that question one way – research-intensive universities serving advantaged students have answered it another.

    OfS’s intervention threatens to impose the latter philosophy sector-wide, eliminating the flexibility that helps students from disadvantaged backgrounds show their “best selves” rather than punishing them for structural inequalities that affect their week-to-week performance.

    Now what

    As such, a regulator seeking to intervene faces an interesting challenge with no obviously good options – albeit one of its own making. Another approach might have been to cap the most egregious practices – prohibit triple-route calculations, limit best-credit selection to 90 per cent of total credits, cap borderline zones at 1.5 percentage points.

    That would eliminate the worst outliers while preserving meaningful autonomy. The sector would likely comply minimally while claiming victory, but oodles of variation would remain.

    A stricter approach would be mandating identical algorithms – but that would provoke rebellion. Devolved nations would refuse, citing devolved powers and triggering a constitutional confrontation. Research-intensive universities would mount legal challenges on academic freedom grounds, if they’re not preparing to do so already. Post-92s would deploy equity arguments, claiming standardisation harms universities serving disadvantaged students.

    A politically savvy but inadequate approach might have been mandatory transparency rather than prescription. Requiring universities to publish algorithms in standardised format with some underpinning philosophy would help. That might preserve autonomy while creating a bit of accountability. Maybe competitive pressure and reputational risk will drive voluntary convergence.

    But universities will resist even being forced to quantify and publicise the effects of their grading systems. They’ll argue it undermines confidence and damages the UK’s international reputation.

    Given the diversity of courses, providers, students and PSRBs, algorithms also feel like a weird thing to standardise. I can make a much better case for a defined set of subject awards within a shared governance framework (including subject benchmark statements, related PSRBs and degree algorithms) than for tightening standardisation in isolation.

    The fundamental problem is that the UK degree classification system was designed for a different age, a different sector and a different set of students. It was probably a fiction to imagine that sorting everyone into First, 2:1, 2:2 and Third was possible even 40 years ago – but today, it’s such obvious nonsense that without richer transcripts, it just becomes another way to drag down the reputation of the sector and its students.

    Unfit for purpose

    In 2007, the Burgess Review – commissioned by Universities UK itself – recommended replacing honours degree classifications with detailed achievement transcripts.

    Burgess identified the exact problems we have today – considerable variation in institutional algorithms, the unreliability of classification as an indicator of achievement, and the fundamental inadequacy of trying to capture three years of diverse learning in a single grade.

    The sector chose not to implement Burgess’s recommendations, concerned that moving away from classifications would disadvantage UK graduates in labour markets “where the classification system is well understood.”

    Eighteen years later, the classification system is neither well understood nor meaningful. A 2:1 at one institution isn’t comparable to a 2:1 at another, but the system’s facade of equivalence persists.

    The sector chose legibility and inertia over accuracy and ended up with neither – sticking with a system that protected institutional diversity while robbing students of the ability to show off theirs. As we see over and over again, a failure to fix the roof when the sun was shining means reform may now arrive externally imposed.

    Now the regulator is knocking on the conformity door, there’s an easy response. OfS can’t take an annual pop at grade inflation if most of the sector abandons the outdated and inadequate degree classification system. Nothing in the rules seems to mandate it, some UG quals don’t use it (think regulated professional bachelors), and who knows where the White Paper’s demand for meaningful exit awards at Level 4 and 5 fit into all of this.

    Maybe we shouldn’t be surprised that a regulator that oversees a meaningless and opaque medal system with a complex algorithm that somehow boils an entire university down to “Bronze”, “Silver”, “Gold” or “Requires Improvement” is keen to keep hold of the equivalent for students.

    But killing off the dated relic would send a really powerful signal – that the sector is committed to developing the whole student, explaining their skills and attributes and what’s good about them – rather than pretending that the classification makes the holder of a 2:1 “better” than those with a Third, and “worse” than those with a First.

    Source link

  • College Student Mental Health Remains a Wicked Problem

    College Student Mental Health Remains a Wicked Problem

    Just 27 percent of undergraduates describe their mental health as above average or excellent, according to new data from Inside Higher Ed’s main annual Student Voice survey of more than 5,000 undergraduates at two- and four-year institutions.

    Another 44 percent of students rate their mental health as average on a five-point scale. The remainder, 29 percent, rate it as below average or poor. 

    In last year’s main Student Voice survey, 42 percent of respondents rated their mental health as good or excellent, suggesting a year-over-year decline in students feeling positive about their mental health. This doesn’t translate to more students rating their mental health negatively this year, however, as this share stayed about the same. Rather, more students in this year’s sample rate their mental health as average (2025’s 44 percent versus 29 percent in 2024). 

    About the Survey

    Student Voice is an ongoing survey and reporting series that seeks to elevate the student perspective in institutional student success efforts and in broader conversations about college.

    Look out for future reporting on the main annual survey of our 2025–26 cycle, Student Voice: Amplified. Check out what students have already said about trust, artificial intelligence and academics, cost of attendance, and campus climate.

    Some 5,065 students from 260 two- and four-year institutions, public and private nonprofit, responded to this main annual survey about student success, conducted in August. Explore the data captured by our survey partner Generation Lab here and here. The margin of error is plus or minus one percentage point.

    The story is similar regarding ratings of overall well-being. In 2024, 52 percent of students described their overall well-being as good or excellent. This year, 33 percent say it’s above average or excellent. Yet because last year’s survey included slightly different categories (excellent, good, average, fair and poor, instead of excellent, above average, average, below average and poor), it’s impossible to make direct comparisons. 

    How does this relate to other national data? The 2024-2025 Healthy Minds Study found that students self-reported lower rates of moderate to severe depressive symptoms, anxiety and more for the third year in a row—what one co-investigator described as “a promising counter-narrative to what seems like constant headlines around young people’s struggles with mental health.” However, the same study found that students’ sense of “flourishing,” including self-esteem, purpose and optimism, declined slightly from the previous year. So while fewer students may be experiencing serious mental health problems, others may be moving toward the middle from a space of thriving.

    Inside Higher Ed’s leadership surveys this year—including the forthcoming Survey of College and University Student Success Administrators—also documented a gap between how well leaders think their institutions have responded to what’s been called the student mental health crisis and whether they think undergraduate mental health is actually improving. In Inside Higher Ed’s annual survey of provosts with Hanover Research, for example, 69 percent said their institution has been effective in responding to student mental health concerns, but only 40 percent said undergraduate health on their campus is on the upswing.

    Provosts also ranked mental health as the No. 1 campus threat to student safety and well-being (80 percent said it’s a top risk), followed by personal stress (66 percent), academic stress (51 percent) and food and housing insecurity (42 percent). Those were all far ahead of risks such as physical security threats (2 percent) or alcohol and substance use issues (13 percent).

    Among community college provosts, in particular, food and housing insecurity was the leading concern, with 86 percent naming it a top risk.

    Financial insecurity can impact mental health, and both factors can affect academic success. Among 2025 Student Voice respondents who have ever seriously considered stopping out of college (n=1,204), for instance, 43 percent describe their mental health as below average or poor. Among those who have never considered stopping out (n=3,304), the rate is just 23 percent. And among the smaller group of students who have stopped out for a semester or more but re-enrolled (n=557), 40 percent say their mental health is below average or poor, underscoring that returnees remain an at-risk group for completion.

    Similarly, 43 percent of students who have seriously considered stopping out rate their financial well-being as below average or poor, versus 23 percent among students who’ve never considered stopping out—the same split as the previous finding on mental health.

    The association between students’ confidence in their financial literacy and their risk of dropping out is weaker, supporting the case for tangible basic needs support: Some 25 percent of respondents who have considered stopping out rate their financial literacy as below average or poor, compared to 15 percent of those who have not considered stopping out.

    Angela K. Johnson, vice president for enrollment management at Cuyahoga Community College in Ohio, said her institution continuously seeks feedback from students about how their financial stability and other aspects of well-being intersect.

    “What students are saying by ‘financial’ is very specific around being unhoused, food insecurity,” she said. “And part of the mental health piece is also not having the medical insurance support to cover some of those ongoing services. We do offer some of them in our counseling and psychological services department, but we only offer so many.”

    All this bears on enrollment and persistence, Johnson said, “but it really is a student psychological safety problem, a question of how they’re trying to manage their psychological safety without their basic needs being met.”

    A ‘Top-of-Mind Issue’

    Tri-C, as Johnson’s college is called, takes a multipronged approach to student wellness, including via an app called Help Is Here, resource awareness efforts that target even dual-enrollment students and comprehensive basic needs support: Think food pantries situated near dining services, housing transition coordination, childcare referrals, utility assistance, emergency funds and more.

    Faculty training is another focus. “Sometimes you see a student sleeping in your class, but it’s not because the class is boring. They may have been sleeping in their car last night,” Johnson said. “They may not have had a good meal today.”

    Political uncertainty may also be impacting student wellness. The American Council on Education hosted a webinar earlier this year addressing what leaders should be thinking about with respect to “these uncertain times around student well-being,” said Hollie Chessman, a director and principal program officer at ACE. “We talked about identity, different identity-based groups and how the safe spaces and places are not as prevalent on campuses anymore, based on current legislation. So some of that is going to be impacting the mental health and well-being of our students with traditionally underrepresented backgrounds.”

    Previously released results from this year’s Student Voice survey indicate that most students, 73 percent, still believe that most or nearly all of their peers feel welcomed, valued and supported on campus. That’s up slightly from last year’s 67 percent. But 32 percent of students in 2025 report that recent federal actions to limit diversity, equity and inclusion efforts have negatively impacted their experience at college. This increases to 37 percent among Asian American and Pacific Islander and Hispanic students, 40 percent among Black students and 41 percent among students of other races. It decreases among white students, to 26 percent. Some 65 percent of nonbinary students (n=209) report negative impacts. For international students (n=203), the rate is 34 percent.

    The Student Voice survey doesn’t reveal any key differences among students’ self-ratings of mental health by race. Regarding gender, 63 percent of nonbinary students report below average or poor mental health, more than double the overall rate of 29 percent. In last year’s survey, 59 percent of nonbinary students reported fair or poor mental health.

    In a recent ACE pulse survey of senior campus leaders, two in three reported moderate or extreme concern about student mental health and well-being. (Other top concerns were the value of college, long-term financial viability and generative artificial intelligence.)

    “This is a top-of-mind issue, and it has been a top-of-mind issue for college and university presidents” since even before the pandemic, Chessman said. “And student health and well-being is a systemic issue, right? It’s not just addressed by a singular program or a counseling session. It’s a systemic issue that permeates.”

    In Inside Higher Ed’s provosts’ survey, the top actions these leaders reported taking to promote mental health on their campus in the last year are: emphasizing the importance of social connection and/or creating new opportunities for campus involvement (76 percent) and investing in wellness facilities and/or services to promote overall well-being (59 percent).

    Despite the complexity of the issue, Chessman said, many campuses are making strides in supporting student well-being—including by identifying students who aren’t thriving “and then working in interventions to help those students.” Gatekeeper training, or baseline training for faculty and staff to recognize signs of student distress, is another strategy, as is making sure faculty and staff members can connect students to support resources, groups and peers.

    “One of the big things that we have to emphasize is that it is a campuswide issue,” Chessman reiterated.

    More on Health and Wellness

    Other findings on student health and wellness from this newest round of Student Voice results show:

    1. Mental health is just one area of wellness in which many students are struggling.

    Asked to rate various dimensions of their health and wellness at college, students are most likely to rate their academic fit as above average or excellent, at 38 percent. Sense of social belonging (among other areas) is weaker, with 27 percent of students rating theirs above average or excellent. One clear opportunity area for colleges: promoting healthy sleep habits, since 44 percent of students describe their own as below average or poor. (Another recent study linked poor sleep among students to loneliness.)

    2. Many students report using unhealthy strategies to cope with stress, and students at risk of stopping out may be most vulnerable.

    As for how students deal with stress at college, 56 percent report a mix of healthy strategies (such as exercising, talking to family and friends, and prioritizing sleep) and unhealthy ones (such as substance use, avoidance of responsibilities and social withdrawal). But students who have seriously considered stopping out, and those who have stopped out but re-enrolled, are less likely than those who haven’t considered leaving college to rely on mostly healthy and effective strategies.

    3. Most students approve of their institution’s efforts to make key student services available and accessible.

    Despite the persistent wellness challenge, most students rate as good or excellent their institution’s efforts to make health, financial aid, student life and other services accessible and convenient. In good news for community colleges’ efforts, two-year students are a bit more likely than their four-year peers to rate these efforts as good or excellent, at 68 percent versus 62 percent.

    ‘It’s Easy to Feel Isolated’

    The Jed Foundation, which promotes emotional health and suicide prevention among teens and young adults, advocates a comprehensive approach to well-being based on seven domains:

    • Foster life skills
    • Promote connectedness and positive culture
    • Recognize and respond to distress
    • Reduce barriers to help-seeking
    • Ensure access to effective mental health care
    • Establish systems of crisis management
    • Reduce access to lethal means

    At JED’s annual policy summit in Washington, D.C., this month, advocates focused on sustaining the progress that has been made on mental health, as well as on the growing influence of artificial intelligence and the role of local, state and federal legislation on mental health in the digital age. Rohan Satija, a 17-year-old first-year student at the University of Texas at Austin who spoke at the event, told Inside Higher Ed in an interview that his mental health journey began in elementary school, when his family emigrated from New Zealand to Texas.

    “Just being in a completely new environment and being surrounded by a completely new group of people, I struggled with my mental health, and because of bullying and isolation at school, I struggled with anxiety and panic attacks,” he said.

    Satija found comfort in books and storytelling filled with “characters whom I could relate to. I read about them winning in their stories, and it showed me that I could win in my own story.”

    Satija eventually realized these stories were teaching lessons about resilience, courage and empathy—lessons he put into action when he founded a nonprofit to address book deserts in low-income and otherwise marginalized communities in Texas. Later, he founded the Vibrant Voices Project for incarcerated youth, “helping them convert their mental health struggles into powerful monologues they can perform for each other.”

    Currently a youth advocacy coalition fellow at JED, Satija said that college so far presents a challenge to student mental health in its “constant pressure to perform in all facets, including academically and socially and personally. I’ve seen many of my peers that have entered college with me, and a lot of us expect freedom and growth but get quickly bogged down with how overwhelming it can be to balance coursework, jobs, living away from your family and still achieving.”

    Rohan Satija, center, speaks at JED’s annual policy summit in Washington earlier this month.

    He added, “This competitive environment can make small setbacks feel like failures, and I’d say perfectionism can often become kind of like a silent standard.”

    Another major challenge? Loneliness and disconnection. “Even though campuses are full of people, it’s easy to feel isolated, especially as a new student, and even further, especially as a first-generation student, an immigrant or anyone far from home.”

    While many students are of course excited for the transition to adulthood and “finally being free for the first time,” he explained, “it comes with a lot of invisible losses, including losing the comfort of your family and a stable routine … So I think without intentional efforts to build connection in your new college campus, a lot of students feel that their sense of belonging can erode pretty quickly.”

    In this light, Satija praised UT Austin’s club culture, noting that some of the extracurricular groups he’s joined assign a “big,” or student mentor, to each new student, or “little,” driving connection and institutional knowledge-sharing. Faculty members are also good at sharing information about mental health resources, he said, including through the learning management system.

    And in terms of proactive approaches to overall wellness, the campus’s Longhorn Wellness Center is effective in that it “doesn’t promote itself as this big, like, crisis response space: ‘Oh, we’re here to improve your mental health. We’re here to make your best self,’ or anything like that,” he said. “It literally just promotes itself as a chill space for student wellness. They’re always talking about their massage chairs.”

    “That gets students in the door, yeah?” Satija said.

    This independent editorial project is produced with the Generation Lab and supported by the Gates Foundation.

    Source link

  • The Black Box Problem: Why Cameras Matter in the Online Classroom – Faculty Focus

    The Black Box Problem: Why Cameras Matter in the Online Classroom – Faculty Focus

    Source link

  • Visa oversubscription at UCL may be more than just a PR problem

    Visa oversubscription at UCL may be more than just a PR problem

    Richard Adams’ reporting for the Guardian sets out the immediate fallout.

    Hundreds of international students, including around 200 from China, are stranded after UCL admitted it had run out of Confirmation of Acceptance for Studies (CAS) allocations.

    The Guardian reports that many have already spent thousands on flights and accommodation – others are already in the UK and now face deportation.

    Comments like this one on Reddit illustrate the issue:

    On September 22nd, I suddenly received a notice from UCL, telling me that the issuance of CAS had been suspended… the only option they’ve given is to defer my enrolment to 2026. I’ve already rented a flat and the money is non-refundable.

    The reputational damage may spread beyond UCL. A YouTube video entitled “UK university cancels CAS letters” lists causes like overbooking and compliance checks without actually mentioning UCL. And a look at Chinese-language spaces suggests the story has gone semi-viral – re-told and amplified with screenshots said to be from affected cohorts.

    UCL told us that it’s urgently working with the Home Office to secure additional CAS numbers and is doing everything it can to resolve this as quickly as possible:

    In the meantime, we are contacting affected students directly to explain the situation, offer our sincere apologies, and provide support including the option to defer their place to next year.

    The short-term picture is reputational damage and urgent negotiations with the Home Office. But potentially, the longer-term problem is consumer law – and the conflicting risks and incentives that our immigration regime and the consumer protection regime create.

    Push me pull you

    Universities, of course, have to apply to the Home Office for CAS (Confirmation of Acceptance for Studies) numbers. The number allocated is based on how many international students each university expects to admit.

    They have to aim to be as accurate as possible – they’re not permitted to significantly over-estimate these figures as a precaution. The problem this year for UCL is as follows:

    We’ve experienced significantly more applications and acceptances of offers than anticipated, and as a result, we have exceeded the number of Confirmation of Acceptance for Studies (CAS) numbers allocated to us by the Home Office. Our planning is based on historical data and expected trends which take account of attrition rates and other factors.

    For all universities, the numbers are always estimates. This is because, in any one year, more offer holders than expected may accept their place, or more students may meet the academic requirements than in previous years – both of which increase demand for CAS allocations.

    The question then is how to manage the risks – not least because as well as worries about over-recruiting, as per the Legal Migration white paper, UKVI will soon be demanding a visa refusal rate of less than 10 per cent and a course enrolment rate of at least 90 per cent of CASs issued.

    UUKi’s advice on that looks like this:

    Universities may wish to consider reviewing their deposit requirements alongside their diversification plans to help ensure applicants are genuine students and intent on studying. This could include introducing or increasing deposits or introducing earlier deposit deadlines.

    It’s not hard to see how immigration policy pushes universities towards locking students in once they apply, and then having to take steps to limit the impact if a surprising number then accept and/or meet any offer made.

    The problem is that those steps may not be compatible with protections students are supposed to have. In other words, it may not be quite as simple as it looks to transfer the risks being loaded onto universities onto students.

    CMA’s earlier warnings

    You may remember that after the pandemic admissions crunch caused by those mutant algorithms, the CMA issued specific advice reminding universities that:

    Universities and colleges should not make binding offers which they know they may not be able to honour, and should avoid terms which allow them wide discretion to withdraw offers once accepted.

    Then in updated CMA guidance to universities in 2023, the same themes recur:

    Institutions must provide prospective students with clear, accurate, comprehensive, unambiguous and timely information about courses, teaching, teaching locations and any limiting conditions.

    And echoing its Statement on Admissions, the guidance stresses that terms allowing a university excessive discretion to withdraw or change the service must be fair:

    HE providers should not use terms which allow wide discretion to vary or cancel aspects of the educational service after an offer has been accepted, or to limit or exclude liability for failure to provide what was promised.

    Non-refundable deposits

    As at most universities, UCL’s Tuition Fee Deposits Policy 2025 says deposits are:

    …typically non-refundable if the offer-holder simply chooses not to enrol or is unable to enrol for reasons within their control.

    Refund routes are narrow – visa refusal, academic failure, programme cancellation, scholarship funding – and discretionary. Refunds may also be reduced by bank charges or currency fluctuations.

    The CMA’s unfair terms guidance (CMA37) says that deposits must reflect a trader’s pre-estimate of the loss, not operate as punitive lock-ins.

    Paragraph 5.14 warns that forcing consumers to forfeit prepayments:

    …is open to serious objection where it bears no relation to the business’s actual costs.

    Where universities use deposits to insure against under-recruitment, the price is often borne by students – in ways consumer law regards as unfair.

    UCL told us that:

    Tuition Fee Deposits are not intended to deter withdrawals and represent a genuine estimate of the loss suffered where an individual doesn’t enrol. UCL specifically sets out that Tuition Fee Deposits aren’t non-refundable in all circumstances.

    Acts of god

    Meanwhile, UCL’s terms and conditions allow it to cancel programmes and treat “under or over demand for courses or modules” as an “event outside our control.”

    In the undergraduate version, Section 15 lists over or under-subscription alongside things like government restrictions and industrial action as circumstances for which UCL “will not be responsible or liable for failure to perform.”

    And under Section 5, UCL may withdraw or cancel a programme and will then “use commercially reasonable endeavours” to offer a suitable alternative or permit withdrawal.

    The CMA’s HE consumer law advice is explicit that providers must not draft broad discretionary rights to withdraw courses after offers have been accepted. Terms must be narrow, transparent, and balanced – and force majeure cannot be used to cover risks the provider should reasonably plan for.

    In what appears to be the CMA’s view, oversubscription is not an act of God – it’s a business choice.

    UCL’s terms also cap its liability for breach of contract at twice the tuition fee, and exclude responsibility for consequential losses – including travel, accommodation, and visa fees.

    But under the Consumer Rights Act 2015, suppliers can’t exclude liability for foreseeable losses arising from their own breach – and the CMA warns against blanket exclusions of precisely these losses.

    If students have rented expensive private halls or bought non-refundable flights on the strength of UCL’s assurances, those look potentially like foreseeable losses. Trying to exclude them may not survive scrutiny under the fairness test.

    The university told us that:

    UCL does not seek to limit or exclude liability that it cannot lawfully limit or exclude and accepts a fair and reasonable allocation of liability in the terms.

    The exacerbating issue is that evidence on student forums appears to show that UCL knew weeks before the start of term that there could be a capacity issue.

    UCL states that first-year undergraduates who meet the published criteria – such as applying by the deadline and firmly accepting their offer – are “guaranteed” a place in UCL accommodation.

    But posts on student forums suggest that by early September some applicants were being told the guarantee had effectively become a “priority” allocation because of high demand, leaving students scrambling for private halls after cheaper options had gone.

    It means that many are now locked into costly private housing contracts, without a contractual route to compensation because the contract expressly excludes accommodation losses.

    The university’s UG terms say:

    UCL does not accept any liability for loss that does not flow naturally from a breach of its obligations under these Terms. This is often referred to as indirect or consequential loss. In addition, particular types of loss that UCL does not accept liability for, whether direct or indirect and whether considered a possibility at the time the contractual relationship came into effect, are loss of earnings (including delay in receipt of potential earnings), loss of opportunity, loss of profit and loss of your data.

    That could also be a classic example of an unfair exclusion clause under the Consumer Rights Act.

    All of this lands at a time when UCL is, as a first target in a likely series of claims, already preparing to defend itself in the High Court against claims from students over pandemic and strike disruption. That trial, due to begin in early 2026, may test amongst other things whether the “force majeure” clauses that universities have relied on to exclude liability are enforceable at all.

    The CMA has long said that force majeure clauses covering a university’s own staff strikes are likely unlawful, and OfS has echoed concerns in its guidance. In UCL’s case, the test claims may explore whether something truly uncontrollable in March 2020 became predictable – and therefore compensable – over time.

    That context matters because UCL’s oversubscription response leans on similar legal logic – that over-demand is “outside its control” and liability for students’ losses is capped. Regulators, adjudicators and courts could now be asked whether these contract clauses are actually fair.

    A risky model

    Recruiting large numbers of international students is inherently volatile. Visa policies change, attrition rates fluctuate, and global demand can surge unexpectedly. But while the business model may be risky, in theory the law prevents the transfer of that risk onto students via hefty deposits, discretionary refunds, cancellation rights or liability caps.

    In other words, an airline can take the risk of overbooking a flight – but if it does, you have the right to compensation, as well as a choice between a refund and an alternative flight.

    In many ways, UKVI and Home Office policy pushes universities towards the sorts of risk management practices that consumer law was designed to rule out.

    But the problem may not only be that universities sometimes over-recruit. It may be that they do so on terms that attempt to ensure they are protected, while students are not.

    It’s not yet clear whether UCL is committing to compensation – or seeking to rely on the terms that would, on the face of it, allow it to avoid compensating.

    But if the pandemic/strikes litigation establishes that universities cannot contract away responsibility with sweeping force majeure clauses, oversubscription could become the next flashpoint in regulation and the courts – with real implications across the sector.

    ======

    A UCL spokesperson said:

    This year, UCL has seen an extraordinary surge in demand from international students, a reflection of our global reputation and the value students place on a UCL education.

    We’ve experienced significantly more applications and acceptances of offers than anticipated, and as a result, we have exceeded the number of Confirmation of Acceptance for Studies (CAS) numbers allocated to us by the Home Office. Our planning is based on historical data and expected trends which take account of attrition rates and other factors.

    We are urgently working with the Home Office to secure additional CAS numbers and are doing everything we can to resolve this as quickly as possible. In the meantime, we are contacting affected students directly to explain the situation, offer our sincere apologies, and provide support including the option to defer their place to next year.

    We also recognise that some of our recent communications have caused confusion and uncertainty, and we are sincerely sorry for that. We are committed to supporting every student impacted by this and are grateful for their patience and understanding as we work to find a solution.

    An Office for Students spokesperson said:

    All registered universities and colleges must show that they’ve given due regard to CMA guidance about how to comply with consumer protection law in developing and implementing their policies, procedures, and terms and conditions. Students invest a significant amount of time and money in their studies and it’s important that their consumer rights are protected when making this investment.

    Source link

  • The Growing Problem of Scientific Research Fraud

    The Growing Problem of Scientific Research Fraud

    When a group of researchers at Northwestern University uncovered evidence of widespread—and growing—research fraud in scientific publishing, editors at some academic journals weren’t exactly rushing to publish the findings.

    “Some journals did not even want to send it for review because they didn’t want to call attention to these issues in science, especially in the U.S. right now with the Trump administration’s attacks on science,” said Luís A. Nunes Amaral, an engineering professor at Northwestern and one of the researchers on the project. “But if we don’t, we’ll end up with a corrupt system.”

    Last week Amaral and his colleagues published their findings in the Proceedings of the National Academy of Sciences of the United States of America. They estimate that they were able to detect anywhere between 1 and 10 percent of fraudulent papers circulating in the literature and that the actual rate of fraud may be 10 to 100 times more. Some subfields, such as those related to the study of microRNA in cancer, have particularly high rates of fraud.

    While dishonest scientists may be driven by pressure to publish, their actions have broad implications for the scientific research enterprise.

    “Scientists build on each other’s work. Other people are not going to repeat my study. They are going to believe that I was very responsible and careful and that my findings were verified,” Amaral said. “But if I cannot trust anything, I cannot build on others’ work. So, if this trend goes unchecked, science will be ruined and misinformation is going to dominate the literature.”

    Luís A. Nunes Amaral

    Numerous media outlets, including The New York Times, have already written about the study. And Amaral said he’s heard that some members of the scientific community have reacted by downplaying the findings, which is why he wants to draw as much public attention to the issue of research fraud as possible.

    “Sometimes it gets detected, but instead of the matter being publicized, these things can get hidden. The person involved in fraud at one journal may get kicked out of one journal but then goes to do the same thing on another journal,” he said. “We need to take a serious look at ourselves as scientists and the structures under which we work and avoid this kind of corruption. We need to face these problems and tackle them with the seriousness that they deserve.”

    Inside Higher Ed interviewed Amaral about how research fraud became such a big problem and what he believes the academic community can do to address it.

    (This interview has been edited for length and clarity.)

    Q: It’s no secret that research fraud has been happening to some degree for decades, but what inspired you and your colleagues to investigate the scale of it?

    A: The work started about three years ago, and it was something that a few of my co-authors who work in my lab started doing without me. One of them, Jennifer Byrne, had done a study that showed that in some papers there were reports of using chemical reagents that would have made the reported results impossible, so the information had to be incorrect. She recognized that there was fraud going on and it was likely the work of paper mills.

    So, she started working with other people in my lab to find other ways to identify fraud at scale that would make it easier to uncover these problematic papers. Then, I wanted to know how big this problem is. With all of the information that my colleagues had already gathered, it was relatively straightforward to plot it out and try to measure the rate at which problematic publications are growing over time.

    It’s been an exponential increase. Every one and a half years, the number of paper mill products that have been discovered is doubling. And if you extrapolate these lines into the future, it shows that in the not-so-distant future these kinds of fraudulent papers would be the overwhelming majority within the scientific literature.
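    The arithmetic behind that extrapolation is a simple fixed-doubling-time projection. A minimal sketch, using an assumed starting count purely for illustration (the study’s actual figures are not reproduced here):

    ```python
    # Exponential growth with a fixed doubling time:
    # count(t) = count(0) * 2 ** (t / T_double)
    def extrapolate(initial_count, years, doubling_years=1.5):
        """Project a count forward assuming it doubles every `doubling_years`."""
        return initial_count * 2 ** (years / doubling_years)

    # With an assumed 1,000 detected paper-mill products today and a
    # 1.5-year doubling time, six years means 4 doublings: a 16x increase.
    projected = extrapolate(1000, 6)  # 16000.0
    ```

    The same formula shows why the claim is alarming: any quantity with a doubling time shorter than the literature’s overall growth rate will eventually dominate it.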

    [Figure: a line graph of all scientific articles, paper mill products, PubPeer-commented papers, and retracted papers, by year of publication. All lines rise, but the line for paper mill products rises fastest. Source: Proceedings of the National Academy of Sciences of the United States of America]

    Q: What are the mechanisms that have allowed—and incentivized—such widespread research fraud?

    A: There are paper mills that produce large amounts of fake papers by reusing language and figures in different papers that then get published. There are people who act as brokers between those that create these fake papers, people who are putting their name on the paper and those who ensure that the paper gets published in some journal.

    Our paper showed that there are editors—even for legitimate scientific journals—that help to get fraudulent papers through the publishing process. A lot of papers that end up being retracted were handled and accepted by a small number of individuals responsible for allowing this fraud. It’s enough to have just a few editors—around 30 out of thousands—who accept fraudulent papers to create this widespread problem. A lot of those papers were being supplied to these editors by these corrupt paper mill networks. The editors were making money from it, receiving citations to their own papers and getting their own papers accepted by their collaborators. It’s a machine.

    Science has become a numbers game, where people are paying more attention to metrics than the actual work. So, if a researcher can appear to be this incredibly productive person that publishes 100 papers a year, edits 100 papers a year and reviews 100 papers a year, academia seems to accept this as natural as opposed to recognizing that there aren’t enough hours in the day to actually do all of these things properly.

    If these defectors don’t get detected, they have a huge advantage because they get the benefits of being productive scientists—tenure, prestige and grants—without putting in any of the effort. If the number of defectors starts growing, at some point everybody has to become a defector, because otherwise they are not going to survive.

    Q: [Your] paper found a surge in the number of fraudulent research papers produced by paper mills that started around 2010. What are the conditions of the past 15 years that have made this trend possible?

    A: There were two things that happened. One of them is that journals started worrying about their presence online. It used to be that people would read physical copies of a journal. But then, only looking at the paper online—and not printing it—became acceptable. The other thing that became acceptable is that instead of subscribing to a journal, researchers can pay to make their article accessible to everyone.

    These two trends enabled organizations that were already selling essays to college students or theses to Ph.D. students to start selling papers. They could create their own journals and just post the papers there; fraudulent scientists pay them and the organizations make nice money from that. But then these organizations realized that they could make more money by infiltrating legitimate journals, which is what’s happening now.

    It’s hard for legitimate publishers to put an end to it. On the one hand, they want to publish good research to maintain their reputation, but every paper they publish makes them money.

    Q: Could the rise of generative AI accelerate research fraud even more?

    A: Yes. Generative AI is going to make all of these problems worse. The data we analyzed was before generative AI became a concern. If we repeat this analysis in one year, I would imagine that we’ll see an even greater acceleration of these problematic papers.

    With generative AI in the picture, you don’t actually need another person to make fake papers—you can just ask ChatGPT or another large language model. And it will enable many more people to defect from doing actual science.

    Q: How can the academic community address this problem?

    A: We need collective action to resist this trend. We need to prevent these things from even getting into the system, and we need to punish the people that are contributing to it.

    We need to make people accountable for the papers that they claim to be authors of, and if someone is found to have engaged in unethical behavior, they should be forbidden from publishing for a period of time commensurate with the seriousness of what they did. We need to enable detection, consequences and implementation of those consequences. Universities, funding agencies and journals should not hide, saying they can’t do anything about this.

    This is about demonstrating integrity and honesty and looking at how we are failing with clear eyes and deciding to take action. I’m hoping that the scientific enterprise and scientific stakeholders rise to that challenge.

    Source link

  • The Problem with Capitulating to Fascism in Higher Education

    The Problem with Capitulating to Fascism in Higher Education

    Higher education serves different purposes for different people. For some, it represents transformation and expanded horizons. For others, it remains a site of oppression—a place where white supremacy and anti-Blackness flourish while administrations proclaim commitments to diversity even as their actions contradict these stated values. The commitments to diversity, equity, and inclusion (DEI) have long been performative at most predominantly white institutions (PWIs). Now, institutions no longer need to maintain even this pretense.

    Dr. Frederick Engram Jr.

    The current presidential administration has made anti-Black, anti-immigrant, anti-LGBTQ+, and anti-women policies central to its agenda. We are not approaching fascism—we are immersed in it. The fundamental problem with higher education and liberal politics more broadly is that while we all recognized the warning signs, no substantive preventative measures were taken to counter the impending assault.

    When the previous Trump administration targeted K-12 education—falsely claiming that critical race theory was being taught in elementary schools and suspending administrators in states like Texas—higher education watched passively, believing itself safe from similar attacks. Instead of mounting resistance and uniting against authoritarian overreach, higher education capitulated. Institutions cancelled classes and programs designed to educate students about historical injustices, prioritizing the comfort of white students and families while disregarding everyone else.

    As Professor Emeritus Dr. John R. Thelin documents in his seminal work A History of American Higher Education, the system was designed from its inception to serve wealthy, white, cisgender, able-bodied men. Higher education was never intended to include marginalized people of color or women. The argument that white men are now being excluded from spaces where they have always been centered would be absurd if it weren’t so dangerous.

    Anti-discrimination DEI initiatives became necessary precisely because white men were not voluntarily making space for others—supported by white women who were themselves fighting for inclusion. The notion that white men feel excluded from higher education reflects a false sense of entitlement and the sting of having their mediocrity exposed. This wounded sense of supremacy drives them to destroy institutions rather than share them.

    Fascism is not approaching—it has arrived.

    The targeted attacks on Harvard, UCLA, University of Pennsylvania, minority-serving institutions (MSIs), and historically Black colleges and universities (HBCUs) are rooted in anti-Black rhetoric that was explicitly outlined in Project 2025. This blueprint seeks to create a dystopian America where marginalized voices are silenced and governance is built around white anxieties and grievances.

    The worst possible response from higher education institutions is capitulation. Instead of forming coalitions, deploying legal resources, and mobilizing their extensive alumni networks, institutions are either confronting this administration in isolation or retreating into silence. Someone should inform higher education that fascism doesn’t reward compliance. It seeks total destruction and will not protect those who failed to oppose it simply because they remained quiet.

    Our institutions and academic disciplines face existential threats. Regardless of how compliant we choose to be, when the destruction is complete, nothing will remain standing. We cannot measure progressive politics by white comfort levels, nor should white feelings determine whether we defend the most vulnerable among us.

    Understanding liberation and resistance in this moment requires recognizing that active opposition is our only viable option. Millions have died, millions are dying, and millions more await death—all to satisfy the bloodlust of mediocre leaders drunk on power. Our resistance must be meaningful and sustained.

    What purpose will silence serve when we lose everything anyway?

    The time for half-measures and performative gestures has passed. Higher education must choose between principled resistance and institutional suicide. The stakes could not be higher, and history will judge our response.

    _________

    Dr. Frederick Engram Jr. is an assistant professor of higher education at Fairleigh Dickinson University.

    Source link

  • Antisemitism Is Not a Problem at George Mason (opinion)

    Antisemitism Is Not a Problem at George Mason (opinion)

    Ages ago, in the 1970s Soviet Union, a Jewish stand-up comedian, Mikhail Zhvanetski, remarked in one of his skits that if you want to argue about the taste of coconuts (not available in the Soviet Union at that time), it’s better to talk to those who’ve actually tried them.

    If you want to argue about antisemitism in academia, better ask those who have actually experienced it. Ask me.

    I was 16 years old when I graduated from high school in Moscow in 1971. My ethnic heritage—Jewish—was written on my state ID by the authorities. I couldn’t change it. I applied to the “Moscow MIT”: Moscow Institute of Physics and Technology. I passed the entrance tests with flying colors: 18 points out of 20, higher than 85 percent of those admitted. I was denied entry. I knew why. The unwritten but strict quota was that Jews could make up no more than 2 percent of freshmen.

    I did get my education, at another university less closely observed by the party authority. But six years later, looking for a job, I could not find one. In part, this was because institute directors knew they could be disciplined if they hired Jews who then applied to emigrate to Israel. I later learned that I was hired only when my future boss and close friend gave his word of honor that I would never try to emigrate.

    Two years later, I applied for Ph.D. study at the renowned Lebedev Physical Institute of the Russian Academy of Sciences (home to seven Nobel laureates). It was common knowledge at that time that one of the officials at Lebedev who had to approve admissions was a notorious antisemite. My gentile adviser also knew that, made sure that the official would never see either my characteristically Jewish face or my state ID, and took over all paperwork communications himself under various pretexts. When I was officially admitted and walked into the official’s office, they looked like they were going to have a heart attack. This was antisemitism.

    In 1994, 10 years after graduating, I moved to the United States, where, eventually, I devoted more than 20 years of service to the Naval Research Laboratory. Then, in 2019, I joined the faculty at George Mason University, one of the most ethnically diverse universities in the country. In my time here, I have never seen any sign of antisemitism, not a shred. I graduated a Muslim student, who—in his own words—felt honored to have me as his adviser (he even invited me to his sister’s wedding, which was restricted, due to the pandemic, to just 20 guests). I taught several more Muslim students and did research with some others. We openly discussed our religions, and I found these students to be good and compassionate listeners if I chose to share one or another story from my Jewish experience.

    Now, however, the U.S. Department of Education is taking seriously a charge of “a pervasive hostile environment for Jewish students and faculty” at George Mason. This is as shocking to me (and to many of my Jewish colleagues at GMU) as hearing that I have broken two legs and never noticed it. In fact, during the trying months after Oct. 7 and amid growing pro-Palestinian protests on campuses, I often praised Mason president Gregory Washington’s handling of this sensitive issue. While paying full respect to peaceful protest, freedom of speech and the First Amendment, he prevented disruption of the educational process and university business.

    To this point, I can again dig into my experience under a totalitarian regime. When I came to America in 1994, I was fascinated by the famous case of Yates v. U.S., in which the Supreme Court issued a decision that offered a powerful contrast to Soviet rule. In that 1957 case, the court reversed the convictions of 14 Communist leaders in California who had been charged with advocating for the overthrow of the U.S. government by force. As Justice Black wrote, they “were tried upon the charge that they believe in and want to foist upon this country a different, and, to us, a despicable, form of authoritarian government in which voices criticizing the existing order are summarily silenced. I fear that the present type of prosecutions are more in line with the philosophy of authoritarian government than with that expressed by our First Amendment.”

    To me, this case reflected a quintessential characteristic of American democracy: rephrasing Voltaire, “We may find your view despicable, but will defend to the death your right to say it.”

    Though the details of the antisemitism complaint against George Mason have not been made public, it appears that Washington’s leadership is coming under attack based on just two cases involving three students; only one of those cases involved an alleged incident (vandalism) that occurred on campus. In both cases, the university administration, in collaboration with law enforcement, took immediate and harsh steps to resolve the situations: As Washington noted in a recent message to campus, the university was applauded by the Jewish Community Relations Council of Greater Washington for “deploying the full weight of the university’s security and disciplinary measures to prevent these students from perpetrating harm on campus.”

    And these incidents are outliers. Just as three thieves who may be GMU students wouldn’t attest to “pervasive thievery” on campus, three students alleged to have violent anti-Israeli agendas do not constitute a “pervasive hostile environment for Jewish students and faculty.” On the contrary, I feel safer and more assured knowing that three miscreants out of a student body of 40,000 were immediately and efficiently dealt with.

    What does make me feel uncomfortable—and what I do find antisemitic—is the implicit suggestion that I, an American Jew who does not have Israeli citizenship, must feel offended and defensive in the face of any criticism of any action of the Israeli government. I find such beliefs reprehensible, and they encroach on my freedom to have my own opinion about international affairs.

    Gregory Washington is my president, and I am confident that he is doing an excellent job protecting all faculty and students, including Jews, from bigotry and harassment. It is false allegations of antisemitism on campus under the pretext of “defending” Jews like myself that really threaten my well-being as a GMU professor.

    Igor Mazin is a professor of physics at George Mason University.


  • The crisis in the youth sector is a big problem for universities

    It is hard for universities to see beyond their own sector crisis right now, but the crisis facing the youth sector today will be the problem of universities tomorrow.

    The youth sector in the UK contributes greatly to supporting the students and graduates of the future, but it is currently under threat, and the deepest impact will fall on those young people who face the highest barriers to accessing higher education.

    The youth sector engages young people in developing critical skills for life: building relationships with peers, developing resilience and social and emotional skills, and integrating into a community. Many within the higher education sector will recognise these as areas where students and graduates are also struggling.

    At a time when universities are being called upon to widen access for young people, the reality is that young people face narrower opportunities than ever. The challenge for widening participation teams will be multifaceted: supporting attainment raising in schools; tackling entrenched expectations among schools and families about what their children can achieve; and providing the support widening participation students need to progress well once in higher education.

    So how can the higher education sector help ensure that the challenges the youth sector is facing today don’t become a nightmare for widening participation teams to tackle in the future?

    What is happening in the youth sector?

    The youth sector ranges from large organisations such as UK Youth, Scouts and Girlguiding to smaller grassroots organisations that run clubs and activities in and out of schools and community centres across the country.

    There are many similarities between the crises facing the higher education sector and that of the youth sector. Much like universities, the youth sector has faced years of substantial defunding. A YMCA England and Wales report on The state of funding for youth services found that “local authority expenditure on youth services has fallen 73% in England and 27% in Wales since 2010-11” which “represents a real-term cut of £1.2bn to youth services between 2010-11 to 2023-24 in England, and £16.6m in Wales.”

    At the same time as these cuts, the rate of young people who are NEET (not in education, employment or training) is growing, with 13.2 per cent of 16-24 year olds and 15.6 per cent of 18-24 year olds reported as NEET in 2024. Both figures have increased on previous years, particularly among young men. These young people need support, and youth services are increasingly unable to provide it.

    Organisations and charities that have been supporting the youth sector are closing at a rapid rate. The National Citizen Service (NCS), a national youth social action programme which has been running since 2009, has been cut by the Labour government. Student Hubs, the social action charity I worked with, which supported students to engage in social and environmental action, has closed. YMCA George Williams College, an organisation which supported the youth sector to improve the monitoring, evaluation and impact of its activities, closed on 31 March 2025, to the shock of many across the youth sector.

    Whilst the Government’s National Youth Strategy announced in November 2024 is welcome, it will not fix years of systematic underfunding of youth sector services.

    How will this crisis impact universities?

    David Kernohan’s analysis of the UCAS 2025 application figures shows that applications are down, with only applicants from the most advantaged quintile, IMD quintile 5, improving. We are in the midst of what could be a significant decline in the rate at which students from disadvantaged backgrounds enter higher education, despite the transformative opportunities it provides.

    This comes at a time when the government and the regulator have greater expectations of universities to be proactive in supporting students’ and young people’s skills, learning and access to opportunity. In February the Office for Students announced the successful providers in its latest funding round to deliver projects which tackle areas of the Equality of Opportunity Risk Register. The register supports universities to consider barriers across the student lifecycle and how they might mitigate them.

    Seeing the range of projects which have been awarded funding, it is clear that universities are being pushed to go further in imagining their role in shaping the lives of the students they engage, and that role starts significantly earlier than freshers’ week. The funding shows the Office for Students placing greater emphasis on universities to address barriers to participation, and with the youth sector in crisis, that role may need to become wider still if universities are to fulfil their access missions.

    Thankfully, there are actions universities can take now which will make a difference both to young people and widening participation teams.

    Tackling the problems together

    The youth sector cannot afford to wait. If universities want to be ready to meet the challenges of tomorrow, they need to build strong collaborative relationships with organisations already situated in communities whilst they are still here. Partnership with the youth sector offers an opportunity to enhance university strategic activity whilst making genuine social and economic impact.

    Universities could be doing more to provide expertise on monitoring and evaluation of youth activities, enhancing the quality of local activities, and conducting research to support future outcomes. There’s an opportunity for universities to learn from these partnerships too, particularly because the youth sector has a range of expertise which is highly applicable to the work the sector is doing in broadening its widening participation and civic strategies. These partnerships will sometimes be informal, and sometimes they might be formalised through knowledge exchange programmes like student consultancy.

    Students can play a big role in linking universities and youth services. Research conducted by the National Youth Agency in 2024 found “that fewer than seven per cent of respondents to a national survey of youth workers are under 26 years old”. There is a desperate need for youth workers and particularly under-30s to support the sector. Student Hubs’ legacy resources detail the approach we took to supporting students to volunteer in local schools, libraries and community centres to provide free support to young people as part of place-based programmes with universities.

    Universities and students’ unions have spaces they are looking to commercialise, whilst also trying to offer students jobs on campus. They could work collaboratively with community groups to open up campus spaces, provide student employment by staffing them, and in turn support young people and families to access campus facilities.

    The time is now

    One of the hallmarks of a crisis is communities coming together to meet challenges head on, and universities shouldn’t wait to be invited. Trust will need to be built and relationships take time to forge.

    The best time to start is now. Universities should mobilise whilst there is still a youth sector left to support; otherwise, the void left by disappearing youth services will demand an even larger role for universities in young people’s lives.
