Tag: system

  • UC System Reverses Decision to End Incentives for Postdocs


    In a letter to system chancellors Tuesday, University of California system president James Milliken said he would not end financial support for hiring postdoctoral fellows out of the UC President’s Postdoctoral Fellowship Program. 

    A system spokesperson told Inside Higher Ed earlier this month that the UC office had decided to halt its $85,000 per fellow, per year, hiring incentives beginning with fellows hired as full-time faculty after summer 2025. 

    “Given the myriad challenges currently facing UC—including disruptions in billions of dollars in annual federal support, as well as uncertainty around the state budget—reasonable questions were raised in recent months about whether the University could maintain the commitment to current levels of incentive funding,” Milliken wrote in the Tuesday letter. 

    He said he considered a proposal to sunset the incentive program but ultimately decided against it. Still, he said, there may be some future changes to the program, including a potential cap on the number of incentives supported and changes to how they are distributed across system campuses. 

    “After learning more about the history and success of the program and weighing the thoughtful perspectives that have been shared, I have concluded that barring extraordinary financial setbacks, the PPFP faculty hiring incentive program will continue while the University continues to assess the program’s structure as well as its long-term financial sustainability.”


  • Feds cannot withhold funding from UC system amid lawsuit, judge rules



    Dive Brief:

    • A federal judge on Friday issued a preliminary injunction barring the Trump administration from freezing the University of California system’s research funding as part of civil rights investigations. 
    • In a scathing ruling, U.S. District Judge Rita Lin found the administration’s actions unconstitutional, describing “a playbook of initiating civil rights investigations of preeminent universities to justify cutting off federal funding,” with the aim of “forcing them to change their ideological tune.”
    • While a lawsuit over the Trump administration’s actions is ongoing, Lin barred the federal government from using civil rights investigations to freeze UC grant money, condition its grants on any measure that would violate recipients’ speech rights, or seek fines and other money from the system.

    Dive Insight:

    In her ruling, Lin described a “three-stage playbook” that the Trump administration uses to target universities. First, an agency involved with the administration’s Task Force to Combat Anti-Semitism announces civil rights investigations or planned enforcement actions. Then, the administration issues mass grant cancellations without following legally mandated administrative procedures, Lin wrote.

    In the third stage, Lin said, the U.S. Department of Justice demands payment of millions or billions of dollars in addition to other policy changes in return for restored funding. A DOJ spokesperson on Monday declined to comment on the lawsuit. 

    In the case of UC, the judge ruled that plaintiffs — a coalition of faculty groups and unions, including the American Association of University Professors — provided “overwhelming evidence” of the administration’s “concerted campaign to purge ‘woke,’ ‘left,’ and ‘socialist’ viewpoints from our country’s leading universities.”

    “It is undisputed that this precise playbook is now being executed at the University of California,” wrote Lin, citing public statements by Leo Terrell, senior counsel in the DOJ’s civil rights wing and the head of the administration’s antisemitism task force. Terrell alleged that the UC system had been “hijacked by the left” and vowed to open investigations. 

    The Trump administration did just that. In August, it froze $584 million in research funding at the University of California, Los Angeles after concluding that the institution violated civil rights law. It primarily cited UCLA’s decision to allow a 2024 pro-Palestinian protest encampment to remain on campus for almost a week before calling in the police. 

    The administration has sought a $1.2 billion penalty from UCLA to release the funds and settle the allegations. “The costs associated with this demand, if left to stand, would have far-reaching consequences,” Chancellor Julio Frenk said in a public message in August. 

    Lin noted in her Friday ruling that the administration also sought settlement terms “that had nothing to do with antisemitism,” including policy changes to how UCLA handles student protests, an adoption of the administration’s views on gender, and a review of its diversity, equity and inclusion programs.

    The administration’s campaign resulted in a significant and ongoing chilling of faculty’s actions, both in and out of the classroom, Lin said.

    In addition to teaching and conducting research differently, members of the plaintiff groups have also changed how they engage in public discourse and limited their participation in protest, Lin said. Faculty have self-censored on topics such as structural racism and scrubbed their websites of references to DEI out of fear of reprisal. 

    “These are classic, predictable First Amendment harms, and exactly what Defendants publicly said that they intended,” Lin concluded.

    While acknowledging the importance of combating antisemitism, Lin said the government was “silent on what actions UCLA took to address” antisemitism issues on its campus between May of 2024, when pro-Palestinian protesters established an encampment, and July 2025, when the DOJ concluded UCLA had violated civil rights law by not doing enough to protect Jewish students from harassment.

    As part of a separate lawsuit, Lin in September ordered the National Institutes of Health and other agencies to restore suspended grants to UCLA. 

    UCLA and the UC system are among several prominent universities similarly targeted by the federal government. At least five institutions so far have signed deals with the Trump administration to resolve federal civil rights investigations. The agreements brokered by Columbia, Brown and Cornell universities require each to pay millions of dollars to the federal government, to causes favored by the Trump administration, or both.

    Harvard University, on the other hand, has fought back against the administration’s tactics. After repeated federal attacks, accompanied by unprecedented ultimatums, the university sued the administration and successfully had the government’s $2.2 billion funding freeze against it reversed. The Trump administration has previously stated its intent to appeal. 


  • Measuring What Matters: A Faculty Development System That Improves Teaching Quality – Faculty Focus



  • Algorithms aren’t the problem. It’s the classification system they support


    The Office for Students (OfS) has published its annual analysis of sector-level degree classifications over time, and alongside it a report on Bachelors’ degree classification algorithms.

    The former is of the style (and with the faults) we’ve seen before. The latter is the controversial bit, both in the extent to which parts of it represent a “new” set of regulatory requirements and in the “new” set of rules over what universities can and can’t do when calculating degree results.

    Elsewhere on the site my colleague David Kernohan tackles the regulation issue – the upshots of the “guidance” on the algorithms, including what it will expect universities to do both to algorithms in use now, and if a provider ever decides to revise them.

    Here I’m looking in detail at its judgements over two practices. Universities are, to all intents and purposes, being banned from any system which discounts credits with the lowest marks – a practice which the regulator says makes it difficult to demonstrate that awards reflect achievement.

    It’s also ruling out “best of” algorithm approaches – any universities that determine degree class by running multiple algorithms and selecting the one that gives the highest result will also have to cease doing so. Anyone still using these approaches by 31 July 2026 has to report itself to OfS.

    Powers and process do matter, as do questions as to whether this is new regulation, or merely a practical interpretation of existing rules. But here I’m concerned with the principle. Has OfS got a point? Do systems such as those described above amount to misleading people who look at degree results over what a student has achieved?

    More, not less

    A few months ago now on Radio 4’s More or Less, I was asked how Covid had impacted university students’ attainment. On a show driven by data, I was wary about admitting that, as a whole, UK HE isn’t really sure.

    When in-person everything was cancelled back in 2020, universities scrambled to implement “no detriment” policies that promised students wouldn’t be disadvantaged by the disruption.

    Those policies took various forms – some guaranteed that classifications couldn’t fall below students’ pre-pandemic trajectory, others allowed students to select their best marks, and some excluded affected modules entirely.

    By 2021, more than a third of graduates were receiving first-class honours, compared to around 16 per cent a decade earlier – with ministers and OfS on the march over the risk of “baking in” the grade inflation.

    I found that pressure troubling at the time. It seemed to me that for a variety of reasons, providers may have, as a result of the pandemic, been confronting a range of faults with degree algorithms – for the students, courses and providers that we have now, it was the old algorithms that were the problem.

    But the other interesting thing for me was what those “safety net” policies revealed about the astonishing diversity of practice across the sector when it comes to working out the degree classification.

    For all of the comparison work done – including, in England, official metrics on the Access and Participation Dashboard over disparities in “good honours” awarding – I was wary about admitting to Radio 4’s listeners that it’s not just differences in teaching, assessment and curriculum that can drive someone getting a First here and a 2:2 up the road.

    When in-person teaching returned in 2022 and 2023, the question became what “returning to normal” actually meant. Many – under regulatory pressure not to “bake in” grade inflation – removed explicit no-detriment policies, and the proportion of firsts and upper seconds did ease slightly.

    But in many providers, many of the flexibilities introduced during Covid – around best-mark selection, module exclusions and borderline consideration – had made explicit and legitimate what was already implicit in many institutional frameworks. And many were kept.

    Now, in England, OfS is to all intents and purposes banning a couple of the key approaches that were deployed during Covid. For a sector that prizes its autonomy above almost everything else, that’ll trigger alarm.

    But a wider look at how universities actually calculate degree classifications reveals something – the current system embodies fundamentally different philosophies about what a degree represents, philosophies that produce systematically different outcomes for identical student performance, and philosophies that should not be written off lightly.

    What we found

    Building on David Allen’s exercise seven years ago, a couple of weeks ago I examined the publicly available degree classification regulations for more than 150 UK universities, trawling through academic handbooks, quality assurance documents and regulatory frameworks.

    The shock for the Radio 4 listener on the Clapham omnibus would be that there is no standardised national system with minor variations – instead there is a patchwork of fundamentally different approaches to calculating the same qualification.

    Almost every university claims to use the same framework for UG quals – the Quality Assurance Agency benchmarks, the Framework for Higher Education Qualifications and standard grade boundaries of 70 for a first, 60 for a 2:1, 50 for a 2:2 and 40 for a third. But underneath what looks like consistency there’s extraordinary diversity in how marks are then combined into final classifications.

    The variations cluster around a major divide. Some universities – predominantly but not exclusively in the Russell Group – operate on the principle that a degree classification should reflect the totality of your assessed work at higher levels. Every module (at least at Level 5 and 6) counts, every mark matters, and your classification is the weighted average of everything you did.

    Other universities – predominantly post-1992 institutions but with significant exceptions – take a different view. They appear to argue that a degree classification should represent your actual capability, demonstrated through your best work.

    Students encounter setbacks, personal difficulties and topics that don’t suit their strengths. Assessment should be about demonstrating competence, not punishing every misstep along a three-year journey.

    Neither philosophy is obviously wrong. The first prioritises consistency and comprehensiveness. The second prioritises fairness and recognition that learning isn’t linear. But they produce systematically different outcomes, and the current system does allow both to operate under the guise of a unified national framework.

    Five features that create flexibility

    Five structural features appear repeatedly across university algorithms, each pushing outcomes in one direction.

    1. Best-credit selection

    This first one has become widespread, particularly outside the Russell Group. Rather than using all module marks, many universities allow students to drop their worst performances.

    One uses the best 105 credits out of 120 at each of Levels 5 and 6. Another discards the lowest 20 credits automatically. A third takes only the best 90 credits at each level. Several others use the best 100 credits at each stage.

    The rationale is obvious – why should one difficult module or one difficult semester define an entire degree?

    But the consequence is equally obvious. A student who scores 75-75-75-75-55-55 across six modules averages 68.3 per cent. At universities where everything counts, that’s a 2:1. At universities using best-credit selection that drops the two 55s, it averages 75 – a clear first.
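
    The mechanics can be sketched in a few lines of Python – a toy illustration of the principle, not any real institution’s algorithm:

```python
# Toy illustration of "best-credit selection" - not any real
# institution's algorithm. Six equally weighted modules.

def mean_mark(marks):
    """Unweighted mean across all modules."""
    return sum(marks) / len(marks)

def best_credit_mean(marks, keep):
    """Average only the `keep` highest module marks."""
    best = sorted(marks, reverse=True)[:keep]
    return sum(best) / len(best)

marks = [75, 75, 75, 75, 55, 55]

print(round(mean_mark(marks), 1))            # -> 68.3: a 2:1 where everything counts
print(round(best_credit_mean(marks, 4), 1))  # -> 75.0: a clear first once the 55s are dropped
```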

    Best-credit selection is the majority position among post-92s, but virtually absent at Russell Group universities. OfS is now pretty much banning this practice.

    The case against rests on B4.2(c) (academic regulations must be “designed to ensure” awards are credible) and B4.4(e) (credible means awards “reflect students’ knowledge and skills”). Discounting credits with the lowest marks “excludes part of a student’s assessed achievement” and so:

    …may result in a student receiving a class of degree that overlooks material evidence of their performance against the full learning outcomes for the course.

    2. Multiple calculation routes

    These take that principle further. Several universities calculate your degree multiple ways and award whichever result is better. One runs two complete calculations – using only your best 100 credits at Level 6, or taking your best 100 at both levels with 20:80 weighting. You get whichever is higher.

    Another offers three complete routes – unweighted mean, weighted mean and a profile-based method. Students receive the highest classification any method produces.
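
    As a sketch (the two routes here are simplified stand-ins, not a real institution’s rules), a “best of” scheme just runs every algorithm and takes the maximum:

```python
# Hypothetical "multiple calculation routes": compute the degree mark
# several ways and award whichever result is higher.

def route_final_level_only(l5_marks, l6_marks):
    """Route A (simplified): unweighted mean of Level 6 marks only."""
    return sum(l6_marks) / len(l6_marks)

def route_weighted(l5_marks, l6_marks):
    """Route B: 20:80 weighting across Levels 5 and 6."""
    return 0.2 * sum(l5_marks) / len(l5_marks) + 0.8 * sum(l6_marks) / len(l6_marks)

l5 = [58, 62, 60, 64]   # Level 5 module marks
l6 = [68, 71, 66, 73]   # Level 6 module marks

final = max(route_final_level_only(l5, l6), route_weighted(l5, l6))
print(round(final, 1))  # -> 69.5: Route A wins (Route B gives 67.8)
```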

    For those holding onto their “standards”, this sort of thing is mathematically guaranteed to inflate outcomes. You’re measuring the best possible interpretation of what students achieved, not what they achieved every time. As a result, comparison across institutions becomes meaningless. Again, this is now pretty much being banned.

    This time, the case against is that:

    …the classification awarded should not simply be the most favourable result, but the result that most accurately reflects the student’s level of achievement against the learning outcomes.

    3. Borderline uplift rules

    What happens on the cusps? Borderline uplift rules create all sorts of discretion around the theoretical boundaries.

    One university automatically uplifts students to the higher class if two-thirds of their final-stage credits fall within that band, even if their overall average sits below the threshold. Another operates a 0.5 percentage point automatic uplift zone. Several maintain 2.0 percentage point consideration zones where students can be promoted if profile criteria are met.

    If 10 per cent of students cluster around borderlines and half are uplifted, that’s a five per cent boost to top grades at each boundary – the cumulative effect is substantial.
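
    A hedged sketch of how such a rule might work (the zone width and two-thirds threshold are illustrative, borrowed from the examples above):

```python
# Hypothetical borderline-uplift rule: a student just below a boundary
# is promoted if enough final-stage credits sit in the higher band.

BOUNDARIES = [70, 60, 50, 40]  # First, 2:1, 2:2, Third

def classify(average, final_stage_marks, zone=2.0, share_needed=2/3):
    for boundary in BOUNDARIES:
        if average >= boundary:
            return boundary
        # Within the consideration zone, count final-stage modules in
        # the higher band; uplift if the share is high enough.
        in_band = sum(1 for m in final_stage_marks if m >= boundary)
        if boundary - zone <= average and in_band / len(final_stage_marks) >= share_needed:
            return boundary
    return None

# 68.5 average, but four of six final-stage modules at 70+
print(classify(68.5, [72, 74, 70, 71, 55, 69]))  # -> 70 (uplifted to a First)
```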

    One small and specialist provider plays the counterfactual – when it gained degree-awarding powers, it explicitly removed all discretionary borderline uplift. The boundaries are fixed – and it argues this is more honest than trying to maintain discretion that inevitably becomes inconsistent.

    OfS could argue borderline uplift breaches B4.2(b)’s requirement that assessments be “reliable” – defined as requiring “consistency as between students.”

    When two students with 69.4% overall averages receive different classifications (one uplifted to First, one remaining 2:1) based on mark distribution patterns or examination board discretion, the system produces inconsistent outcomes for identical demonstrated performance.

    But OfS avoids this argument, likely because it would directly challenge decades of established discretion on borderlines – a core feature of the existing system. Eliminating all discretion would conflict with professional academic judgment practices that the sector considers fundamental, and OfS has chosen not to pick that fight.

    4. Exit acceleration

    Heavy final-year weighting amplifies improvement while minimising early difficulties. Where deployed, the near-universal pattern is now 25 to 30 per cent for Level 5 and 70 to 75 per cent for Level 6. Some institutions weight even more heavily, with year three counting for 60 per cent of the final mark.

    A student who averages 55 in year two and 72 in year three gets 67.75 overall with a 25:75 weighting – a 2:1. A student who averages 72 in year two and 55 in year three gets 59.25 – just short of a 2:1.

    The magnitude of change is identical – it’s just that the direction differs. The system structurally rewards late bloomers and penalises any early starters who plateau.
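
    The arithmetic is easy to verify (assuming an illustrative 25:75 Level 5/Level 6 split, within the range described above):

```python
# How heavy final-year weighting redistributes identical marks.
# The 25:75 split is illustrative, not any one institution's rule.

def weighted_overall(year2_avg, year3_avg, w2=0.25, w3=0.75):
    return w2 * year2_avg + w3 * year3_avg

late_bloomer = weighted_overall(55, 72)   # weak year two, strong year three
early_peaker = weighted_overall(72, 55)   # strong year two, weak year three

print(late_bloomer)  # -> 67.75: comfortably a 2:1
print(early_peaker)  # -> 59.25: just short of a 2:1
```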

    OfS could argue that 75 per cent final-year weighting breaches B4.2(a)’s requirement for “appropriately comprehensive” assessment. B4 Guidance 335M warns that assessment “focusing only on material taught at the end of a long course… is unlikely to provide a valid assessment of that course,” and heavy (though not exclusive) final-year emphasis arguably extends this principle – if the course’s subject matter is taught across three years, does minimising assessment of two-thirds of that teaching constitute comprehensive evaluation?

    But OfS doesn’t make this argument either, likely because year weighting is explicit in published regulations, often driven by PSRB requirements, and represents settled institutional choices rather than recent innovations. Challenging it would mean questioning established pedagogical frameworks rather than targeting post-hoc changes that might mask grade inflation.

    5. First-year exclusion

    Finally, with a handful of institutional and PSRB exceptions, the first year not counting towards classification is now pretty much universal, removing what used to be the bottom tail of performance distributions.

    While this is now so standard it seems natural, it represents a significant structural change from 20 to 30 years ago. You can score 40s across the board in first year and still graduate with a first if you score 70-plus in years two and three.

    Combine it with other features, and the interaction effects compound. At universities using best 105 credits at each of Levels 5 and 6 with 30:70 weighting, only 210 of 360 total credits – 58 per cent – actually contribute to your classification. And so on.
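
    The credit arithmetic under that hypothetical stack of rules is simple to confirm:

```python
# How many credits actually count under a hypothetical combination of
# first-year exclusion and best-105-of-120 selection at Levels 5 and 6.

TOTAL_CREDITS = 360      # 120 credits per year over three years
counted_level5 = 105     # best 105 of 120 at Level 5
counted_level6 = 105     # best 105 of 120 at Level 6

counted = counted_level5 + counted_level6  # Level 4 contributes nothing
print(counted)                             # -> 210
print(f"{counted / TOTAL_CREDITS:.0%}")    # -> 58%
```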

    OfS could argue first-year exclusion breaches comprehensiveness requirements – when combined with best-credit selection, only 210 of 360 total credits (58 per cent) might count toward classification. But the practice is now so widespread, with only a handful of institutional and PSRB exceptions, that the regulator treats it as neutral accepted practice rather than a compliance concern.

    Targeting something this deeply embedded across the sector would face overwhelming institutional autonomy defenses and would effectively require the sector to reinstate a practice it collectively abandoned over the past two decades.

    OfS’ strategy is to focus regulatory pressure on recent adoptions of “inherently inflationary” practices rather than challenging longstanding sector-wide norms.

    Institution type

    Russell Group universities generally operate on the totality-of-work philosophy. Research-intensives typically employ single calculation methods, count all credits and maintain narrow borderline zones.

    But there are exceptions. One I’ve seen has automatic borderline uplift that’s more generous than many post-92s. Another’s 2.0 percentage point borderline zone adds substantial flexibility. If anything, the pattern isn’t uniformity of rigour – it’s uniformity of philosophy.

    One London university has a marks-counting scheme rather than a weighted average – what some would say is the most “rigorous” system in England. And two others – you can guess who – don’t fit this analysis at all, with subject-specific systems and no university-wide algorithms.

    Post-1992s systematically deploy multiple flexibility features. Best-credit selection appears at roughly 70 per cent of post-92s. Multiple calculation routes appear at around 40 per cent of post-92s versus virtually none at research-intensive institutions. Several post-92s have introduced new, more flexible classification algorithms in the past five years, while Russell Group frameworks have been substantially stable for a decade or more.

    This difference reflects real pressures. Post-92s face acute scrutiny on student outcomes from league tables, OfS monitoring and recruitment competition, and disproportionately serve students from disadvantaged backgrounds with lower prior attainment.

    From one perspective, flexibility is a cynical response to metrics pressure. From another, it’s recognition that their students face different challenges. Both perspectives contain truth.

    Meanwhile, Scottish universities present a different model entirely, using GPA-based calculations across SCQF Levels 9 and 10 within four-year degree structures.

    The Scottish system is more internally standardised than the English system, but the two are fundamentally incompatible. As OfS attempts to mandate English standardisation, Scottish universities will surely refuse, citing devolved education powers.

    London is a city with maximum algorithmic diversity within minimum geographic distance. Major London universities use radically different calculation systems despite competing for similar students. A student with identical marks might receive a 2:1 at one, a first at another and a first with a higher average at a third, purely because of algorithmic differences.

    What the algorithm can’t tell you

    The “five features” capture most of the systematic variation between institutional algorithms. But they’re not the whole story.

    First, they measure the mechanics of aggregation, not the standards of marking. A 65 per cent essay at one university may represent genuinely different work from a 65 per cent at another. External examining is meant to moderate this, but the system depends heavily on trust and professional judgment. Algorithmic variation compounds whatever underlying marking variation exists – but marking standards themselves remain largely opaque.

    Second, several important rules fall outside the five-feature framework but still create significant variation. Compensation and condonement rules – how universities handle failed modules – differ substantially. Some allow up to 30 credits of condoned failure while still classifying for honours. Others exclude students from honours classification with any substantial failure, regardless of their other marks.

    Compulsory module rules also cut across the best-credit philosophy. Many universities mandate that dissertations or major projects must count toward classification even if they’re not among a student’s best marks. Others allow them to be dropped. A student who performs poorly on their dissertation but excellently elsewhere will face radically different outcomes depending on these rules.

    In a world where huge numbers of students now have radically less module choice than they did just a few years ago as a result of cuts, they would have reason to feel doubly aggrieved if modules they never wanted to take in the first place will now count when they didn’t last week.

    Several universities use explicit credit-volume requirements at each classification threshold. A student might need not just a 60 per cent average for a 2:1, but also at least 180 credits at 60 per cent or above, including specific volumes from the final year. This builds dual criteria into the system – you need both the average and the profile. It’s philosophically distinct from borderline uplift, which operates after the primary calculation.
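
    A minimal sketch of such a dual-criteria rule (the 180-credit threshold here is the hypothetical one from the example above):

```python
# Hypothetical dual-criteria test for a 2:1: a 60+ weighted average AND
# at least 180 credits marked at 60 or above.

def meets_two_one(modules):
    """modules: list of (mark, credits) pairs."""
    total_credits = sum(credits for _, credits in modules)
    average = sum(mark * credits for mark, credits in modules) / total_credits
    credits_at_60 = sum(credits for mark, credits in modules if mark >= 60)
    return average >= 60 and credits_at_60 >= 180

# A 62 average overall, but only 120 credits at 60+ - the profile
# criterion fails even though the average criterion passes.
modules = [(70, 60), (66, 60), (55, 60), (57, 60)]
print(meets_two_one(modules))  # -> False
```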

    And finally, treatment of reassessed work varies. Nearly all universities cap resit marks at the pass threshold, but some exclude capped marks from “best credit” calculations while others include them. For students who fail and recover, this determines whether they can still achieve high classifications or are effectively capped at lower bands regardless of their other performance.

    The point isn’t so much that I (or OfS) have missed the “real” drivers of variation – the five features genuinely are the major structural mechanisms. But the system’s complexity runs deeper than any five-point list can capture. When we layer compensation rules onto best-credit selection, compulsory modules onto multiple calculation routes, and volume requirements onto borderline uplift, the number of possible institutional configurations runs into the thousands.

    The transparency problem

    Every day’s a school day at Wonkhe, but what has been striking for me is quite how difficult the information has been to access and compare. Some institutions publish comprehensive regulations as dense PDF documents. Others use modular web-based regulations across multiple pages. Some bury details in programme specifications. Several have no easily locatable public explanation at all.

    UUK’s position on this, I’d suggest, is something of a stretch:

    University policies are now much more transparent to students. Universities are explaining how they calculate the classification of awards, what the different degree classifications mean and how external examiners ensure consistency between institutions.

    Publication cycles vary unpredictably, cohort applicability is often ambiguous, and cross-referencing between regulations, programme specifications and external requirements adds layers upon layers of complexity. The result is that meaningful comparison is effectively impossible for anyone outside the quality assurance sector.

    This opacity matters because it masks that non-comparability problem. When an employer sees “2:1, BA in History” on a CV, they have no way of knowing whether this candidate’s university used all marks or selected the best 100 credits, whether multiple calculation routes were available or how heavily final-year work was weighted. The classification looks identical regardless. That makes it more, not less, likely that they’ll just go on prejudices and league tables – regardless of the TEF medal.

    We can estimate the impact conservatively. Year one exclusion removes perhaps 10 to 15 per cent of the performance distribution. Best-credit selection removes another five to 10 per cent. Heavy final-year weighting amplifies improvement trajectories. Multiple calculation routes guarantee some students shift up a boundary. Borderline rules uplift perhaps three to five per cent of the cohort at each threshold.

    Stack these together and you could shift perhaps 15 to 25 per cent of students up one classification band compared to a system that counted everything equally with single-method calculation and no borderline flexibility. Degree classifications say as much about institutional algorithm choices as about student learning or teaching quality.

    Yes, but

    When universities defend these features, the justifications are individually compelling. Best-credit selection rewards students’ strongest work rather than penalising every difficult moment. Multiple routes remove arbitrary disadvantage. Borderline uplift reflects that the difference between 69.4 and 69.6 per cent is statistically meaningless. Final-year emphasis recognises that learning develops over time. First-year exclusion creates space for genuine learning without constant pressure.

    None of these arguments is obviously wrong. Each reflects defensible beliefs about what education is for. The problem is that they’re not universal beliefs, and the current system allows multiple philosophies to coexist under a facade of equivalence.

    Post-92s add an equity dimension – their flexibility helps students from disadvantaged backgrounds who face greater obstacles. If standardisation forces them to adopt strict algorithms, degree outcomes will decline at institutions serving the most disadvantaged students. But did students really learn less, or attain to a “lower” standard?

    The counterargument is that if the algorithm itself makes classifications structurally easier to achieve, you haven’t promoted equity – you’ve devalued the qualification. And without the sort of smart, skills- and competencies-based transcripts that most of our pass/fail cousins across Europe adopt, UK students end up choosing between a rock and a hard place – if only they were conscious of that choice.

    The other thing that strikes me is that the arguments I made in December 2020 for “baking in” grade inflation haven’t gone away just because the pandemic has. If anything, the case for flexibility has strengthened as the cost of living crisis, inadequate maintenance support and deteriorating student mental health create circumstances that affect performance through no fault of students’ own.

    Students are working longer hours in paid employment to afford rent and food, living in unsuitable accommodation, caring for family members, and managing mental health conditions at record levels. The universities that retained pandemic-era flexibilities – best-credit selection, generous borderline rules, multiple calculation routes – aren’t being cynical about grade inflation. They’re recognising that their students disproportionately face these obstacles, and that a “totality-of-work” philosophy systematically penalises students for circumstances beyond their control rather than assessing what they’re actually capable of achieving.

    The philosophical question remains – should a degree classification reflect every difficult moment across three years, or should it represent genuine capability demonstrated when circumstances allow? Universities serving disadvantaged students have answered that question one way – research-intensive universities serving advantaged students have answered it another.

    OfS’s intervention threatens to impose the latter philosophy sector-wide, eliminating the flexibility that helps students from disadvantaged backgrounds show their “best selves” rather than punishing them for structural inequalities that affect their week-to-week performance.

    Now what

    As such, a regulator seeking to intervene faces an interesting challenge with no obviously good options – albeit one of its own making. Another approach might have been to cap the most egregious practices – prohibit triple-route calculations, limit best-credit selection to 90 per cent of total credits, cap borderline zones at 1.5 percentage points.

    That would eliminate the worst outliers while preserving meaningful autonomy. The sector would likely comply minimally while claiming victory, but oodles of variation would remain.
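
    To make those mechanics concrete, here is a toy sketch of how best-credit selection can lift a profile into a higher class. The marks, the 75 per cent keep-fraction and the class boundaries are invented for illustration – no real university’s algorithm is being reproduced here.

    ```python
    # Toy illustration of best-credit selection in a degree algorithm.
    # All marks and rules here are hypothetical, for illustration only.

    def classify(mean: float) -> str:
        """Map a weighted mean onto UK honours classes (conventional boundaries)."""
        if mean >= 70:
            return "First"
        if mean >= 60:
            return "2:1"
        if mean >= 50:
            return "2:2"
        return "Third"

    def best_credit_mean(marks: list[float], keep_fraction: float) -> float:
        """Average only the strongest `keep_fraction` of equally weighted credits."""
        keep = max(1, round(len(marks) * keep_fraction))
        kept = sorted(marks, reverse=True)[:keep]
        return sum(kept) / len(kept)

    marks = [75, 73, 72, 70, 69, 62, 55, 50]  # eight equally weighted modules

    strict = sum(marks) / len(marks)          # "totality of work" mean
    flexible = best_credit_mean(marks, 0.75)  # discard the weakest quarter

    print(f"{strict:.2f} -> {classify(strict)}")      # 65.75 -> 2:1
    print(f"{flexible:.2f} -> {classify(flexible)}")  # 70.17 -> First
    ```

    The same student profile yields a 2:1 under a strict algorithm and a First under a flexible one – and a generous borderline zone (rounding up anyone within a point or two of a boundary) widens the effect further, which is exactly why caps on these parameters matter.
    
    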

    A stricter approach would be mandating identical algorithms – but that would provoke rebellion. Devolved nations would refuse, citing devolved powers and triggering a constitutional confrontation. Research-intensive universities would mount legal challenges on academic freedom grounds, if they’re not preparing to do so already. Post-92s would deploy equity arguments, claiming standardisation harms universities serving disadvantaged students.

    A politically savvy but inadequate approach might have been mandatory transparency rather than prescription. Requiring universities to publish algorithms in standardised format with some underpinning philosophy would help. That might preserve autonomy while creating a bit of accountability. Maybe competitive pressure and reputational risk will drive voluntary convergence.

    But universities will resist even being forced to quantify and publicise the effects of their grading systems. They’ll argue it undermines confidence and damages the UK’s international reputation.

    Given the diversity of courses, providers, students and PSRBs, algorithms also feel like a weird thing to standardise. I can make a much better case for a defined set of subject awards with a shared governance framework (including subject benchmark statements, related PSRBs and degree algorithms) than for tightening standardisation in isolation.

    The fundamental problem is that the UK degree classification system was designed for a different age, a different sector and a different set of students. It was probably a fiction to imagine that sorting everyone into First, 2:1, 2:2 and Third was possible even 40 years ago – but today, it’s such obvious nonsense that without richer transcripts, it just becomes another way to drag down the reputation of the sector and its students.

    Unfit for purpose

    In 2007, the Burgess Review – commissioned by Universities UK itself – recommended replacing honours degree classifications with detailed achievement transcripts.

    Burgess identified the exact problems we have today – considerable variation in institutional algorithms, the unreliability of classification as an indicator of achievement, and the fundamental inadequacy of trying to capture three years of diverse learning in a single grade.

    The sector chose not to implement Burgess’s recommendations, concerned that moving away from classifications would disadvantage UK graduates in labour markets “where the classification system is well understood.”

    Eighteen years later, the classification system is neither well understood nor meaningful. A 2:1 at one institution isn’t comparable to a 2:1 at another, but the system’s facade of equivalence persists.

    The sector chose legibility and inertia over accuracy and ended up with neither – sticking with a system that protected institutional diversity while robbing students of the ability to show off theirs. As we see over and over again, a failure to fix the roof when the sun was shining means reform may now arrive externally imposed.

    Now the regulator is knocking on the conformity door, there’s an easy response. OfS can’t take an annual pop at grade inflation if most of the sector abandons the outdated and inadequate degree classification system. Nothing in the rules seems to mandate it, some UG quals don’t use it (think regulated professional bachelors), and who knows where the White Paper’s demand for meaningful exit awards at Level 4 and 5 fits into all of this.

    Maybe we shouldn’t be surprised that a regulator that oversees a meaningless and opaque medal system with a complex algorithm that somehow boils an entire university down to “Bronze”, “Silver”, “Gold” or “Requires Improvement” is keen to keep hold of the equivalent for students.

    But killing off the dated relic would send a really powerful signal – that the sector is committed to developing the whole student, explaining their skills and attributes and what’s good about them – rather than pretending that the classification makes the holder of a 2:1 “better” than those with a Third, and “worse” than those with a First.

    Source link

  • Only radical thinking will deliver the integrated tertiary system the country needs

    Only radical thinking will deliver the integrated tertiary system the country needs

    The post-16 white paper was an opportunity to radically enable an education and skills ecosystem that is built around the industrial strategy, and that has real resonance with place.

    The idea that skills exist in an entirely different space to education is just wrongheaded. The opportunity comes, however, when we can see a real connection, both in principle and in practice, between further and higher education: a tertiary system that can serve students, employers and society.

    Significant foundations are already in place with the Lifelong Learning Entitlement providing sharp focus within the higher education sector and apprenticeships, now well established, and well regarded across both HE and FE. Yet we still have the clear problem that schools, FE, teaching in HE, research and knowledge transfer are fragmented across the DfE and other associated sector bodies.

    Sum of the parts

    The policy framework needs to be supported by a major and radical rethink of how the parts fit together so we can truly unlock the combined transformational power of education and innovation to raise aspirations, opportunity, attainment, and ultimately, living standards. This could require a tertiary commission of the kind undertaken by Diamond and Hazelkorn in the Welsh system in the mid-2010s.

    Such a commission could produce bold thinking on the scale of the academies movement in schools over the last 25 years. The encouragement to bring groups of schools together has resulted in challenge, but also significant opportunity. We have seen the creation of some excellent FE college groups following an area-based review around a decade ago. The first major coming together of HE institutions is in train with Greenwich and Kent. We have seen limited pilot FE/HE mergers. Now feels like the right time for blue sky thinking that enables the best of all of those activities in a structured and purposeful way that is primarily focused on the benefits to learning and national productivity rather than simply financial necessity.

    Creating opportunities for HE, FE and schools to come together not only in partnerships, but in structural ways will enable the innovation that will create tangible change in local and regional communities. All parts of the education ecosystem face ever-increasing financial challenge. If an FE college and a university wished to offer shared services, then there would need to be competitive tender for the purposes of best value. This sounds sensible except the cost of running such a process is high. If those institutions are part of the same group, then it can be done so much more efficiently.

    FE colleges are embedded in their place and even more connected to local communities. The ability to reach into more disadvantaged communities and to take the HE classroom from the traditional university setting, is a distinct benefit. The growth in private, for-profit HE provision is often because it has a great ability to reach into specific communities. The power of FE/HE collaboration into those same communities would bring both choice and exciting possibility.

    While in theory FE and HE can merge through a section 28 application to the Secretary of State, the reality is that any activity to this point has been marginal and driven by motivation other than enhanced skills provision. If the DfE were to enable, and indeed drive, such collaboration they could create both financial efficiencies and a much greater and more coordinated offer to employers and learners.

    The industrial strategy and the growth in devolved responsibility for skills create interesting new opportunities but we must find ways that avoid a new decade of confusion for employers and learners. The announcement of new vocational qualifications, Technical Excellence Colleges and the like are to be welcomed but must be more than headlines. Learners and employers alike need to be able to see pathways and support for their lifelong skills and learning needs.

    Path to integration

    The full integration of FE and HE could create powerful regional and place-based education and skills offers. Adding in schools and creating education trusts that straddle all levels means that employers could benefit from integrated offers, less bureaucracy and clear, accelerated pathways.

    So now is the moment to develop Integrated Skills and Education Trusts (ISETs): entities that sit within broad groups, benefiting from the efficiencies of scale while maintaining local provision – taking the best of FE (its understanding of skills and local needs) and the best of HE, and actively enabling them to come together.

    Our experience at Coventry, working closely and collaboratively with several FE partners, is that the barriers thrown up within the DfE are in stark and clear contrast to the policy statements of ministers and, indeed, of the Prime Minister. The post-16 white paper will only lead to real change if the policy and the “plumbing” align. The call has to be to think with ambition and to encourage and enable action that serves learners, employers and communities with an education and skills offer that is fit for the next generation.

    Source link

  • A university system reliant on international students has an obligation to understand them

    A university system reliant on international students has an obligation to understand them

    It is becoming difficult to ignore potential tension between the internationalisation of higher education and plans to cut net migration. Recent UK government policies, such as the reduction of the graduate visa from two years to 18 months, could have severe consequences for universities in Scotland.

    Scottish government funding per home student has not kept pace with inflation. To compensate for the subsequent gap in resources, universities have become more dependent on international enrolments.

    In addition, Scotland faces specific demographic challenges. By 2075, the number of working-age Scots is predicted to fall by 14.7 per cent and, without migration, the population would be in decline. Encouraging young people to remain after graduation could help to balance the ageing population. However, although the Scottish government favours a more generous post-study visa route, this is not supported by Westminster.

    Ability to adjust

    Rhetoric around internationalisation tends to emphasise positive factors such as increased diversity and cross-cultural exchange. Yet, as an English for Academic Purposes (EAP) practitioner, I have long been concerned that learners from diverse linguistic backgrounds are often viewed through a lens of deficiency. There is also a risk that their own needs will be overlooked in the midst of political and economic debate.

    To better understand how students’ sense of identity is affected by moving into new educational and social settings, I carried out interview-based research at a Scottish university. Like other “prestigious” institutions, it attracts a large number of applicants from abroad. In particular, some taught master’s degrees (such as those in the field of language education) are dominated by Chinese nationals. Indeed, when recruiting postgraduate interviewees, I was not surprised when only two (out of 11) came from other countries (Thailand and Japan).

    My analysis of data revealed typical reasons for choosing the university: ranking, reputation and the shorter duration of master’s courses. Participants described being met with unfamiliar expectations on arrival, especially as regards writing essays and contributing to discussion. For some, this challenged their previous identities as competent individuals with advanced English skills. These issues were exacerbated in “all-white” classes, where being in the minority heightened linguistic anxiety and the fear of being judged. They had varied experiences of group work: several reported – not necessarily intentional but nonetheless problematic – segregation of students by nationality, undermining the notion that a multi-national population results in close mixing on campus.

    In a survey administered to a wider cohort of respondents on a pre-sessional EAP programme, the majority agreed or strongly agreed when asked if they would befriend British people while at university.

    However, making such connections is far from straightforward. International students are sometimes criticised for socialising in monocultural groups and failing to fully “fit in”. However, the fatigue of living one’s life in another language and simultaneously coping with academic demands means that getting to know locals is not a priority. At the same time, research participants expressed regret at the lack of opportunity to interact with other nationalities, with one remarking, “if everyone around me is Chinese, why did I choose to study abroad?” Some encountered prejudice or marginalisation, reporting that they felt ignored by “fluent” speakers of English. Understandably, this had a detrimental effect on their ability to adjust.

    Different ways to belong

    To gain different perspectives, I also spoke with teachers who work with international students. EAP tutors believed that their classes offer a safe space for them to gain confidence and become used to a new way of working. However, they wondered whether there would be a similarly supportive atmosphere in mainstream university settings. Subject lecturers did not invoke phrases such as “dumbing down”, but several had altered their teaching methods to better suit learners from non-Anglophone backgrounds.

    In addition, they questioned whether internationalisation always equated to diversity. One commented on the advantages of having a “multicultural quality”, but added that it “has to be a mix” – something which is not possible if, like on her course, there are no Scottish students. Another mentioned that the propensity to “stick with your own people” is not a uniquely Chinese phenomenon, but common behaviour regardless of background.

    A few academics had noticed that most Chinese students take an attitude of, “I’m doing my (one-year) master’s and maybe then I have to move back to China.” Chinese students are less likely than some other nationalities to apply for a graduate visa, suggesting that their investment in a degree abroad is of a transactional nature.

    The majority of survey respondents indicated that they would adapt to a new way of life while living abroad. However, during my last conversation with focal interviewees, I uncovered different levels of belonging, ranging from, “I feel like I’m from Scotland”, to “my heart was always in China”, to “I don’t have any home.” Participants generally viewed their stay as temporary: in fact, all but the Japanese student (who accepted a job in the US) returned to their home country after graduation. Although they described their time in Scotland in mostly positive terms, some were disappointed that it had not provided a truly intercultural experience.

    Meltdown

    It is clear that universities in Scotland have become overly reliant on international tuition for their financial sustainability. At the same time, there is conflict between the devolved administration’s depiction of Scotland as outward looking and welcoming, and the reality of stricter migration policies over which it has no control.

    Discourses which position international students as outsiders who add to high immigration numbers could deter some from coming. If they are seen only as economic assets, their own cultural capital and agency might be neglected. It is also important to problematise the notion of “integration”: even my small study suggests that there are different ways of belonging. No group of learners is homogeneous: even if they come from the same country, individual experiences will differ.

    To navigate the current financial crisis, Scottish universities need to do everything possible to maintain their appeal. With elections being held next year, higher education policy will continue to be a key area of discussion. At present, there are no plans to introduce fees for home students, making revenue from international tuition all the more essential.

    However, at a time of global uncertainty, taking overseas students for granted feels enormously unwise. Instead, it is crucial to ask how they can be made to feel like valued members of the academic community. The answer to this question might be different for everyone, but engaging with students themselves, rather than relying on unhelpful assumptions, would be a start.

    Source link

  • Can regulation cope with a unified tertiary system in Wales?

    Can regulation cope with a unified tertiary system in Wales?

    Medr’s second consultation on its regulatory framework reminds us both of the comparatively small size of the Welsh tertiary sector, and the sheer ambition – and complexity – of bringing FE, HE, apprenticeships and ACL under one roof.

    Back in May, Medr (the official name for the Commission for Tertiary Education and Research in Wales) launched its first consultation on the new regulatory system required by the Tertiary Education and Research Wales Act 2022.

    At that stage the sector’s message was that it was too prescriptive, too burdensome, and insufficiently clear about what was mandatory versus advisory.

    Now, five months later, Medr has returned with a second consultation that it says addresses those concerns. The documents – running to well over 100 pages across the main consultation text and six annexes – set out pretty much the complete regulatory framework that will govern tertiary education in Wales from August 2026.

    It’s much more than a minor technical exercise – it’s the most ambitious attempt to create a unified regulatory system across further education, higher education, apprenticeships, adult community learning and maintained school sixth forms that the UK has yet seen.

    As well as that, it’s trying to be both a funder and a regulator; to be responsive to providers while putting students at the centre; and to avoid some of the mistakes that it has seen the Office for Students (OfS) make in England.

    Listening and responding

    If nothing else, it’s refreshing to see a sector body listening to consultation responses. Respondents wanted clearer signposts about what constitutes a compliance requirement versus advisory guidance, and worried about cumulative burden when several conditions and processes come together.

    They also asked for alignment with existing quality regimes from Estyn and the Quality Assurance Agency, and flagged concerns about whether certain oversight might risk universities’ status as non-profit institutions serving households (NPISH) – a technical thing, but one with significant implications for institutional autonomy.

    Medr’s response has been to restructure the conditions more clearly. Each now distinguishes between the condition itself (what must be met), compliance requirements that evidence the condition, and guidance (which providers must consider but may approach differently if they can justify that choice).

    It has also adopted a “make once, use many” approach to information, promising to rely on evidence already provided to Estyn, QAA or other bodies wherever it fits their purpose. And it has aligned annual planning and assurance points with sector cycles “wherever possible.”

    The question, of course, is whether this constitutes genuine simplification or merely better-organised complexity. Medr is establishing conditions of registration for higher education providers (replacing Fee and Access Plans), conditions of funding for FE colleges and others, and creating a unified quality framework and learner engagement code that applies across all tertiary education.

    The conditions themselves

    Some conditions apply universally. Others apply only to registered providers, or only to funded providers, or only to specific types of provision. As we’ve seen in England, the framework includes initial and ongoing conditions of registration for higher education providers (in both the “core” and “alternative” categories), plus conditions of funding that apply more broadly.

    Financial sustainability requires providers to have “strategies in place to ensure that they are financially sustainable” – which means remaining viable in the short term (one to two years), sustainable over the medium term (three to five years), and maintaining sufficient resources to honour commitments to learners. The supplementary detail includes a financial commitments threshold mechanism based on EBITDA ratios.

    Providers exceeding certain multiples will need to request review of governance by Medr before entering new financial commitments. That’s standard regulatory practice – OfS has equivalent arrangements in England – but it represents new formal oversight for Welsh institutions.
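
    As a rough sketch of how a commitments-to-EBITDA threshold test of this kind might operate – the three-times multiple and the figures below are invented for illustration, not Medr’s (or OfS’s) actual parameters:

    ```python
    # Hypothetical sketch of a financial commitments threshold test.
    # The multiple and figures are invented, not Medr's actual rules.

    def review_required(total_commitments: float, ebitda: float,
                        threshold_multiple: float = 3.0) -> bool:
        """Flag a proposed commitment for regulator review when total
        borrowing and similar commitments exceed a multiple of EBITDA."""
        if ebitda <= 0:
            # Negative or zero earnings: any new commitment warrants review
            return True
        return total_commitments / ebitda > threshold_multiple

    # A provider with £40m of commitments against £10m EBITDA sits at 4x:
    print(review_required(40_000_000, 10_000_000))  # True
    print(review_required(25_000_000, 10_000_000))  # False (2.5x)
    ```

    The substantive point is unchanged: breaching the ratio doesn’t block the decision, it triggers Medr’s review of the governance around it.
    
    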

    Critically, Medr says its role is “to review and form an opinion on the robustness of governance over proposed new commitments, not to authorise or veto a decision that belongs to your governing body.” That’s some careful wording – but whether it will prove sufficient in practice (both in detail and in timeliness) when providers are required to seek approval before major financial decisions remains to be seen.

    Governance and management is where the sector seems to have secured some wins. The language around financial commitments has been softened from “approval” to “review.” The condition now focuses on outcomes – “integrity, transparency, strong internal control, effective assurance, and a culture that allows challenge and learning” – rather than prescribing structures.

    And for those worried about burden, registered higher education providers will no longer be required to provide governing body composition, annual returns of serious incidents, individual internal audit reports, or several other elements currently required under Fee and Access Plans. That is a reduction – but won’t make a lot of difference to anyone other than the person lumbered with gathering the sheaf of stuff to send in.

    Quality draws on the Quality Framework (Annex C) and requires providers to demonstrate their provision is of good quality and that they engage with continuous improvement. The minimum compliance requirements, evidenced through annual assurance returns, include compliance with the Learner Engagement Code, using learner survey outcomes in quality assurance, governing body oversight of quality strategies, regular self-evaluation, active engagement in external quality assessment (Estyn inspection and/or QAA review), continuous improvement planning, and a professional learning and development strategy.

    The framework promises that Medr will “use information from existing reviews and inspections, such as by Estyn and QAA” and “aim not to duplicate existing quality processes.” Notably, Medr has punted the consultation on performance indicators to 2027, so providers won’t know what quantitative measures they’ll be assessed against until the system is already live.

    Staff and learner welfare sets out requirements for effective arrangements to support and promote welfare, encompassing both “wellbeing” (emotional wellbeing and mental health) and “safety” (freedom from harassment, misconduct, violence including sexual violence, and hate crime). Providers will have to conduct an annual welfare self-evaluation and submit an annual welfare action plan to Medr. This represents new formal reporting – even if the underlying activity isn’t new.

    The Welsh language condition requires providers to take “all reasonable steps” to promote greater use of Welsh, increase demand for Welsh-medium provision, and (where appropriate) encourage research and innovation activities supporting the Welsh language. Providers must publish a Welsh Language Strategy setting out how they’ll achieve it, with measurable outcomes over a five-year rolling period with annual milestones. For providers subject to Welsh Language Standards under the Welsh Language (Wales) Measure 2011, compliance with those standards provides baseline assurance. Others must work with the Welsh Language Commissioner through the Cynnig Cymraeg.

    Learner protection plans will be required when Medr gives notice – typically triggered by reportable events, course closures, campus closures, or significant changes to provision. The guidance (in the supplementary detail from page 86 onwards) is clear about what does and doesn’t require a plan. Portfolio review and planned teach-out? Generally fine, provided learners are supported. Closing a course mid-year with no teach-out option? Plan required. Whether this offers the sort of protection that students need – especially when changes are made to courses to reduce costs – will doubtless come up in the consultation.

    And then there’s the Learner Engagement Code, set out in Annex D. This is where student representative bodies may feel especially disappointed. The Code is principles-based rather than rights-based, setting out nine principles (embedded, valued, understood, inclusive, bilingual, individual and collective, impactful, resourced, evaluated) – but creates no specific entitlements or rights for students or students’ unions.

    The principles themselves are worthy enough – learners should have opportunities to engage in decision-making, they should be listened to, routes for engagement should be clear, opportunities should reflect diverse needs, learners can engage through Welsh, collective voice should be supported, engagement should lead to visible impact, it should be resourced, and it should be evaluated. But it does all feel a bit vague.

    Providers will have to submit annual assurance that they comply with the Code, accompanied by evidence such as “analysis of feedback from learners on their experience of engagement” and “examples of decisions made as a result of learner feedback.” But the bar for compliance appears relatively low. As long as providers can show they’re doing something in each area, they’re likely to be deemed compliant. For SUs hoping for statutory backing for their role and resources, this will feel like a missed opportunity.

    Equality of opportunity is more substantial. The condition requires providers to deliver measurable outcomes across participation, retention, academic success, progression, and (where appropriate) participation in postgraduate study and research. The supplementary detail (from page 105) sets out that providers must conduct ongoing self-evaluation to identify barriers to equality of opportunity, then develop measurable outcomes over a five-year rolling period with annual milestones.

    Interestingly, there’s a transition period – in 2026-27, HE providers with Fee and Access Plans need only provide a statement confirming continued commitments. Full compliance – including submission of measurable outcomes – isn’t required until 2027-28, with the first progress reports due in 2028-29. That’s a sensible approach given the sector’s starting points vary considerably, but it does mean the condition won’t bite with full force for three years.

    Monitoring and intervention

    At the core of the monitoring approach is an Annual Assurance Return – where the provider’s governing body self-declares compliance across all applicable conditions, supported by evidence. This is supplemented by learner surveys, Estyn/QAA reviews, public information monitoring, complaints monitoring, reportable events, data monitoring, independent assurance, engagement activities and self-evaluation.

    The reportable events process distinguishes between serious incidents (to be reported within 10 working days) and notifiable events (reported monthly or at specified intervals). There are 17 categories of serious incidents, from loss of degree awarding powers to safeguarding failures to financial irregularities over £50,000 or two per cent of turnover (whichever is lower). A table lists notifiable events including senior staff appointments and departures, changes to validation arrangements, and delays to financial returns. It’s a consolidation of existing requirements rather than wholesale innovation, but it’s now formalised across the tertiary sector rather than just HE.

    Medr’s Statement of Intervention Powers (Annex A) sets out escalation from low-level intervention (advice and assistance, reviews) through mid-level intervention (specific registration conditions, enhanced monitoring) to serious “directive” intervention (formal directions) and ultimately de-registration. The document includes helpful flowcharts showing the process for each intervention type, complete with timescales and decision review mechanisms. Providers can also apply for a review by an independent Decision Reviewer appointed by Welsh Ministers – a safeguard that universities dream of in England.

    Also refreshingly, Medr commits to operating “to practical turnaround times” when reviewing financial commitments, with the process “progressing in tandem with your own processes.” A six-week timeline is suggested for complex financing options – although whether this proves workable in practice will depend on Medr’s capacity and responsiveness.

    Quality

    The Quality Framework (Annex C) deserves separate attention because it’s genuinely attempting something ambitious – a coherent approach to quality across FE, HE, apprenticeships, ACL and sixth forms that recognises existing inspection/review arrangements rather than duplicating them.

    The framework has seven “pillars” – learner engagement, learner voice, engagement of the governing body, self-evaluation, externality, continuous improvement and professional learning and development. Each pillar sets out what Medr will do and what providers must demonstrate. Providers will be judged compliant if they achieve “satisfactory external quality assessment outcomes,” have “acceptable performance data,” and are not considered by Medr to demonstrate “a risk to the quality of education.”

    The promise is that:

    …Medr will work with providers and with bodies carrying out external quality assessment to ensure that such assessment is robust, evidence-based, proportionate and timely; adds value for providers and has impact in driving improvement.

    In other words, Estyn inspections and QAA reviews should suffice, with Medr using those outcomes rather than conducting its own assessments. But there’s a caveat:

    …Medr has asked Estyn and QAA to consider opportunities for greater alignment between current external quality assessment methodologies, and in particular whether there could be simplification for providers who are subject to multiple assessments.

    So is the coordination real or aspirational? The answer appears to be somewhere in between. The framework acknowledges that by 2027, Medr expects to have reviewed data collection arrangements and consulted on performance indicators and use of benchmarking and thresholds. Until that consultation happens, it’s not entirely clear what “acceptable performance data” means beyond existing Estyn/QAA judgements. And the promise of “greater alignment” between inspection methodologies is a promise, not a done deal.

    A tight timeline

    The key dates bear noting because they’re tight:

    • April 2026: Applications to the register open
    • August 2026: Register launches; most conditions come into effect
    • August 2027: Remaining conditions (Equality of Opportunity and Fee Limits for registered providers) come into full effect; apprenticeship providers fully subject to conditions of funding

    After all these years, we seem to be looking at some exit acceleration. It gives providers approximately four months from the consultation closing (17 December 2025) to the application process opening. Final versions of the conditions and guidance will presumably need to be published in early 2026 to allow preparation time. And all of this is happening against the backdrop of Senedd elections in 2026 – where polls suggest that some new strategic guidance could be dropped on the new body fairly sharpish.

    And some elements remain unresolved or punted forward. The performance indicators consultation promised for 2027 means providers won’t know the quantitative measures against which they’ll be assessed until the system is live. Medr says it will “consult on its approach to defining ‘good’ learner outcomes” as part of a “coherent, over-arching approach” – but that’s after registration and implementation have begun.

    Validation arrangements are addressed (providers must ensure arrangements are effective in enabling them to satisfy themselves about quality), but the consultation asks explicitly whether the condition “could be usefully extended into broader advice or guidance for tertiary partnerships, including sub-contractual arrangements.” That suggests Medr has been reading some of England’s horror stories and recognises the area needs further work.

    And underlying everything is the question of capacity – both Medr’s capacity to operate this system effectively from day one, and providers’ capacity to meet the requirements while managing their existing obligations. The promise of reduced burden through alignment and reuse of evidence is welcome.

    But a unified regulatory system covering everything from research-intensive universities to community-based adult learning requires Medr to develop expertise and processes across an extraordinary range of provision types. Whether the organisation will be ready by August 2026 is an open question.

    For providers, the choice is whether to engage substantively with this consultation knowing that the broad architecture is set by legislation, or to focus energy on preparing for implementation. For Welsh ministers, the challenge is whether this genuinely lighter-touch, more coherent approach than England’s increasingly discredited OfS regime can be delivered without compromising quality or institutional autonomy.

    And for students – especially those whose representative structures were hoping for statutory backing – there’s a question about whether principles-based engagement without rights amounts to meaningful participation or regulatory box-ticking.

    In England, some observers will watch with interest to see whether Wales has found a way to regulate tertiary education proportionately and coherently. Others will see in these documents a reminder that unified systems, however well-intentioned, require enormous complexity to accommodate the genuine diversity of the sector. The consultation responses, due by 17 December, will expose which interpretation the Welsh sector favours.

    Source link

  • Court temporarily blocks overnight ban on expression at University of Texas System

    Court temporarily blocks overnight ban on expression at University of Texas System

    Dive Brief:

    • A federal judge on Tuesday temporarily blocked University of Texas System officials from enforcing a state law that bans free speech and expression on public campuses between the hours of 10 p.m. and 8 a.m.
    • The Foundation for Individual Rights and Expression sued leaders of the UT system in September on behalf of student groups who argued the law violated their First Amendment rights.
    • U.S. District Judge David Alan Ezra, a Reagan appointee, found that plaintiffs raised “significant First Amendment issues” with the law and its application, and he granted a preliminary injunction on enforcement while the case plays out.

    Dive Insight:

    Texas passed SB 2972 earlier this year in the wake of 2024’s wave of pro-Palestinian protests on U.S. campuses.

    “In April 2024, universities across the nation saw massive disruption on their campus,” state Sen. Brandon Creighton, the primary author of the bill, wrote in a statement of intent. “Protesters erected encampments in common areas, intimidated other students through the use of bullhorns and speakers, and lowered American flags with the intent of raising the flag of another nation.”

    In late September, Creighton was named chancellor and CEO of the Texas Tech University System. 

    Along with specifically prohibiting First Amendment-protected activity overnight, the law also bars the campus community from inviting speakers to campus, using devices to amplify speech and playing drums or other percussive instruments during the last two weeks of any term. 

    In its complaint, FIRE called the law “blatantly unconstitutional.” 

    “The First Amendment doesn’t set when the sun goes down,” FIRE senior supervising attorney JT Morris said in a September statement. “University students have expressive freedom whether it’s midnight or midday, and Texas can’t just legislate those constitutional protections out of existence.”

    Ezra agreed in his ruling. 

    “The First Amendment does not have a bedtime of 10:00 p.m.,” the judge wrote. “The burden is on the government to prove that its actions are narrowly tailored to achieve a compelling governmental interest. It has not done so.”

    In his ruling, Ezra wrote that the law’s free speech restrictions were not content-neutral and so must survive a strict legal test requiring the government to show that the law is the least restrictive means possible of achieving a “compelling” goal. 

    The judge pointed to public posts by Texas Gov. Greg Abbott and the bill’s statement of intent, both decrying the pro-Palestinian protests. Abbott described the protests as antisemitic and called for the arrest and expulsion of protesters.

    “The statute is content-based both on its face and by looking to the purpose and justification for the law,” Ezra wrote. 

    Ezra also highlighted that the statute carved out an exception for commercial speech in his ruling. 

    “Defendants betray the stated goal of preventing disruption and ensuring community safety by failing to expand the Bans to commercial speech,” he wrote. “Students can engage in commercial speech that would otherwise violate the Bans simply because it is not ‘expressive activities,’ no matter how disruptive.”

    In response to the law, the University of Texas at Austin adopted a narrower version of the policy, banning only overnight expressive activities in its common outdoor areas that generate sound audible from a university residence. 

    However, Ezra concluded the pared-down policy wasn’t enough to protect students’ constitutional speech rights, as UT-Austin could change it or enforce it subjectively. 

    “The threat of prosecution arises not only from UT’s adopted policy but also from the legislative statute,” the judge wrote. “As adopted, UT Austin is not currently in compliance with the statute, and at any point could change or be instructed to change its policies to comply with the law.”

    FIRE cheered the injunction on Tuesday. 

    “We’re thankful that the court stepped in and halted a speech ban that inevitably would’ve been weaponized to censor speech that administrators disagreed with,” FIRE Senior Attorney Adam Steinbaugh said in a statement. 

    In its lawsuit, the free speech group has asked the judge to permanently block the law’s enforcement.

    Source link

  • A joined up post-16 system requires system-level thinking combined with local action

    A joined up post-16 system requires system-level thinking combined with local action

    There have been so many conversations, speculations and recommendations aired about the forthcoming post-16 skills and education white paper that you’d be forgiven for thinking it had already been published months ago.

    But no, it’s expected some time this week – possibly as early as Monday – and so for everyone’s sanity it’s worth rehearsing some of the framing drivers and intentions behind it, clearing the decks before the thing finally arrives and we start digesting the policy detail.

    The policy ambition is clear: a coherent and coordinated post-16 “tertiary” sector in England, that offers viable pathways to young people and adult learners through the various levels of education and into employment, contributing to economic growth through providing the skilled individuals the country needs.

    The political challenge is also real: with Reform snapping at Labour’s heels, the belief that the UK can “grow its own” skills and offer opportunity and the prospect of economic security to its young people across the country must become embedded in the national psyche if the government is to see off the threat.

    The politics and policy combine in the Prime Minister’s announcement at Labour Party Conference of an eye-catching new target for two thirds of young people to participate in some form of higher-level learning. That positions next week’s white paper as a longer term systemic shift rather than, say, a strategy for tackling youth unemployment in this parliament – though it’s clear there is also an ambition for the two to go hand in hand, with skills policy now sitting across both DfE and DWP.

    Insert tab a into slot b

    The aspiration to achieve a more joined up and functioning system is laudable – in the best of all possible worlds steering a middle course between the worst excesses and predatory behaviours of the free market, and an overly controlling hand from Whitehall. But the more you try to unpick what’s happening right now, the more you see how fragmented the current “system” is, with incentives and accountabilities all over the place. That’s why you can have brilliant FE and HE institutions delivering life-changing education opportunities, at the same time as the system as a whole seems to be grinding its gears.

    Last week, a report from the Association of Colleges and Universities UK, Delivering a joined-up post-16 skills system, showcased some of the really great regional collaborations already in place between FE colleges and universities. It also set out some of the barriers to collaboration, including financial pressures that cause different providers to chase the same students in the same subjects rather than strategically differentiating their offer, and different regulatory and student finance systems for different kinds of learners and qualifications that create complexity in the system.

    But it’s not only about the willingness and capability of different kinds of provider to coordinate with each other. It’s about the perennial urge of policymakers to tinker with qualifications and set up new kinds of provider creating additional complexity – and the complicating role of private training and HE provision operating “close to market” which can have a distorting effect on what “public” institutions are able to offer. It’s about the lack of join-up even within government departments, never mind across them. It’s also about the pervasiveness of the cultural dichotomy (and hierarchy) between perceptions of white-collar/professional and blue-collar/manual work, and the ill-informed class distinctions and capability-based assumptions underpinning them.

    Some of this fragmentation can be addressed through system-wide harmonisation – such as the intent through the Lifelong Learning Entitlement (LLE) to implement one system of funding for all level 4–6 courses, and bringing all courses in that group under the regulatory purview of the Office for Students. AoC and UUK have also identified a number of areas where potential overlaps could be resolved through system-wide coordination: between OfS, Skills England, and mayoral strategic authorities; between the LLE and the Growth and Skills Levy; and between local skills improvement plans and the (national) industrial strategy. It would be odd indeed if the white paper did not make provision for this kind of coordination.

    But even with efforts to coordinate and harmonise, in any system there is naturally occurring variation – in how employers in different industries are thinking about, reporting, and investing in skills, and at what levels, in the expectations and tolerance of different prospective students for study load, learning environment, scale of the costs of learning, and support needs, and in the relationship between a place, its economy and its people. The implications of those variations are best understood by the people who are closest to the problem.

    The future is emergent

    Complex systems have emergent properties, ie the stuff that happens because lots of actors responded to the world as they saw it but that could not necessarily have been predicted. Policy is always generating unforeseen outcomes. And it doesn’t matter how many data wonks and uber-brains you have in the Civil Service, they’ll still not be able to plot every possible outcome as any given policy intervention works its way through the system.

    So for a system to work you need good quality feedback loops, in which insight arrives in a timely way on the desks of responsible actors who have the capability, opportunity and motivation to adapt in light of it. In the post-16 system that’s about education and civic leaders being really good at listening to their students, their communities and to employers – and investing in the quality of civic leadership (and identifying and ejecting bad apples) should be one of the ways that a post-16 skills system can be made to work.

    But good leaders need to be afforded the opportunity to decide what their response will be to the specifics of the needs they have identified and be trusted, to some degree, to act in the public interest. So from a Whitehall perspective the question the white paper needs to answer is not only how the different bits of the system ought to join up, but whether the people who are instrumental in making it work themselves have the skills, information and flexibility to take action when it inevitably doesn’t.

    Source link