Category: Regulation

  • What’s in the new Office for Students strategy?

    The Office for Students began a consultation process on its 2025-30 strategy back in December 2024. Alongside the usual opportunities for written responses, there was a series of “feedback events” held early in 2025, promoted specifically to higher education provider staff, FE college staff, and students and student representatives.

    In the past OfS has faced arguably justified criticism for failing to take sector feedback on proposals into account – but we should take heart that there are significant differences between what was originally proposed and what has just been finalised and published.

    Graphic design is our passion

    Most strikingly, we are presented with four new attitudes that we are told will “drive delivery of all our strategic goals in the interest of students” – and to hammer the point home, individual activities in the “roadmap” are labelled with coloured hexagonal markers where “a particular activity will exemplify certain attitudes”. We get:

    • Ambitious for all students from all backgrounds (an upward arrow in a pink hexagon)
    • Collaborative in pursuit of our priorities and in our stewardship of the sector (two stylised hands in the shape of a heart, yellow hexagon)
    • Vigilant about safeguarding public money and student fees (a pound sign on a teal hexagonal background)
    • Vocal that higher education is a force for good, for individuals, communities and the country (a stylised face and soundwave on a purple hexagon)

    Where things get potentially confusing is that the three broadly unchanged strategic goals – quality (tick, yellow circle), sector resilience (shield, blue circle), student experience and support (someone carrying an iPad, red circle) – are underpinned both by the attitudes and by the concept of “equality of opportunity” (teal ouroboros arrow). The only change at this conceptual level is that “the wider student interest” is characterised as “experience and support”. Don’t worry – the subsections of these are the same as in the consultation.

    Fundamentally, OfS’ design language is giving openness and transparency, with a side order of handholding through what amounts to a bit of a grab-bag of interventions. The list is pared down from the rather lengthy set of bullet points initially presented, and there are some notable changes.

    Quality

    In the quality section what has been added is an assurance that OfS will do this “in collaboration with students, institutions, and sector experts”, and a commitment to “celebrate and share examples of excellence wherever we find them”. These are of course balanced with the corresponding stick: “Where necessary, we will pursue investigation and enforcement, using the full range of our powers.” This comes alongside clarification that the new quality system will be built on, rather than alongside, the TEF.

    What is gone is the Quality Risk Register. Though arguably an eminently sensible addition to the OfS armoury of risk registers, the vibe from the consultation was that providers were concerned it might become another arm of regulation rather than a helpful tool for critical reflection.

    Also absent from the final strategy is any mention of exploring alignment with European quality standards, which featured in the consultation materials. Similarly, the consultation’s explicit commitment to bring transnational education into the integrated quality model has not been restated – it’s unclear whether this reflects a change in priority or simply different drafting choices.

    Students

    In the section on students, language about consumer rights is significantly softened, with much more on supporting students in understanding their rights and correspondingly less on seeking additional powers to intervene on these issues. Notably absent are the consultation’s specific commitments – the model student contract, plans for case-report publication, and reciprocal intelligence sharing. The roadmap leans heavily into general “empowerment” language rather than concrete regulatory tools. And, for some reason, language on working with the Office of the Independent Adjudicator has disappeared entirely.

    A tweak to language clarifies that OfS is no longer keen to regulate around extra-curricular activity – though there will be “non-regulatory” approaches.

    New here is a commitment to “highlight areas of concern or interest that may not be subject to direct regulation but which students tell us matter to them”. The idea here looks to be that OfS can support institutions to respond proactively, working with sector agencies and other partners. It is pleasing to see a commitment to this kind of information sharing (I suspect this is where the OIA has ended up) – though a commitment to continue to collect and publish data on the prevalence of sexual misconduct in the draft appears not to have made the final cut.

    Resilience

    The “navigation of an environment of increased financial and strategic risks” has been a key priority of OfS over most of the year since this strategy was published – and what’s welcome here is clearer drafting and a positive commitment to working with providers to improve planning for potential closures, and that OfS will “continue to work with the government to address the gaps in the system that mean that students cannot be adequately protected if their institution can no longer operate”.

    Governance – yes, OfS will not only consider an enhanced focus, it will strengthen its oversight on governance. That’s strategic action right there. Also OfS will “work with government on legislative solutions that would stop the flow of public money when we [OfS, DfE, SLC] have concerns about its intended use.”

    Also scaled back is the consultation’s programmatic approach to governance reform. Where the consultation linked governance capability explicitly to equality and experience outcomes, the final version frames this primarily as assurance and capability support rather than a reform agenda. The shift suggests OfS moving toward a lighter-touch, collaborative posture on governance rather than directive intervention.

    Regulation

    OfS will now “strive to deliver exemplary regulation”, and interestingly the language on data has shifted from securing “modern real-time data” to “embedding the principle collect once, use many times” – and there is a pleasing promise to work with other regulators and agencies to avoid duplication.

    Two other consultation commitments have been quietly downgraded. The explicit language on working with Skills England to develop a shared view of higher education’s role in meeting regional and national skills needs has disappeared – odd given the government’s focus on this agenda. And while the Teaching Excellence Framework remains present, the consultation’s push to make TEF “more routine and more widespread” has been cooled – the final version steps back from any commitments on cadence or coverage.

    What’s missing within the text of the strategy, despite being in the consultation version, are the “I statements” – these are what Debbie McVitty characterised on Wonkhe as:

    intended to describe what achieving its strategic objectives will look and feel like for students, institutions, taxpayers and employers in a clear and accessible way, and are weighted towards students, as the “primary beneficiaries” of the proposed strategy.

    These have been published, but separately and with a few minor revisions. Quite what status they have is unclear:

    The ‘I statements’ are a distillation of our objectives, as set out in our strategy. They are not regulatory tools. We will not track the performance of universities and colleges against them directly.

  • The Office for Students steps on to shaky ground in an attempt to regulate academic standards

    The funny thing about today’s intervention by the Office for Students is that it is not really about grade inflation, or degree algorithms.

    I mean, it is on one level: we get three investigation reports on providers related to registration condition B4, and an accompanying “lessons learned” report that focuses on degree algorithms.

    But the central question is about academic standards – how they are upheld, and what role an arm of the government has in upholding them.

    And it is about whether OfS has the ability to state that three providers are at “increased risk” of breaching a condition of registration on the scant evidence of grade inflation presented.

    And it is certainly about whether OfS is actually able to dictate (or even strongly hint at its revealed preferences on) the way degrees are awarded at individual providers, or the way academic standards are upheld.

    If you are looking for the rule book

    Paragraph 335N(b) of the OfS Regulatory Framework is the sum total of the advice it has offered before today to the sector on degree algorithms.

    The design of the calculations that turn a collection of module marks (each assessed carefully against criteria set out in the module handbook, and cross-checked by an academic from another university against a shared understanding of what should be expected of students) into the award of a degree at a given classification is a potential area of concern:

    where a provider has changed its degree classification algorithm, or other aspects of its academic regulations, such that students are likely to receive a higher classification than previous students without an increase in their level of achievement.

    These circumstances could potentially be a breach of condition of registration B4, which relates to “Assessment and Awards” – specifically condition B4.2(c), which requires that:

    academic regulations are designed to ensure that relevant awards are credible;

    Or B4.2(e), which requires that:

    relevant awards granted to students are credible at the point of being granted and when compared to those granted previously

    The current version of condition B4 came into force in May 2022.

    In the mighty list of things that OfS needs to have regard to that we know and love (section 2 of the 2017 Higher Education and Research Act), we learn that OfS has to pay mind to “the need to protect the institutional autonomy of English higher education providers” – and that, in the way it regulates, it should be:

    Transparent, accountable, proportionate, and consistent and […] targeted only at cases where action is needed

    Mutant algorithms

    With all this in mind, we look at the way the regulator has acted on this latest intervention on grade inflation.

    Historically the approach has been one of assessing “unexplained” (even once, horrifyingly, “unwarranted”) good honours (1 or 2:1) degrees. There’s much more elsewhere on Wonkhe, but in essence OfS came up with its own algorithm – taking into account the degrees awarded in 2010-11 and the varying proportions of students in given subject areas, with given A levels and of a given age – that starts from the position that non-traditional students shouldn’t be getting as many good grades as their (three good A levels, straight from school) peers, and that if they did then this was potentially evidence of a problem.

    To quote from annex B (“statistical modelling”) of last year’s release:

    We interact subject of study, entry qualifications and age with year of graduation to account for changes in awarding […] our model allows us to statistically predict the proportion of graduates awarded a first or an upper second class degree, or a first class degree, accounting for the effects of these explanatory variables.

    When I wrote this up last year I plotted the impact each of these variables is expected to have – the fixed effect coefficient estimates show the increase (or decrease) in the likelihood of a person getting a first or upper second class degree.
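
    To make the mechanics concrete, here is a minimal sketch of how fixed effect coefficient estimates of this kind translate into a predicted probability. The coefficients, variable groupings and baseline below are invented for illustration – they are not OfS’ actual estimates, and the real model also interacts each variable with year of graduation.

    ```python
    # Illustrative only: made-up coefficients, not the OfS model's estimates.
    import numpy as np

    def predict_good_honours(baseline_log_odds, effects, student):
        """Turn fixed-effect coefficients (log-odds) into a predicted probability
        of a first or upper second for one student profile."""
        log_odds = baseline_log_odds + sum(effects[k][v] for k, v in student.items())
        return 1 / (1 + np.exp(-log_odds))   # logistic (inverse-logit) link

    # Hypothetical effects for three explanatory variables
    effects = {
        "entry_quals": {"AAA": 1.2, "BBB": 0.4, "other": 0.0},
        "subject":     {"history": 0.1, "business": -0.2, "physics": 0.0},
        "age":         {"under_21": 0.0, "21_and_over": -0.3},
    }

    student = {"entry_quals": "BBB", "subject": "business", "age": "21_and_over"}
    print(round(predict_good_honours(0.8, effects, student), 2))   # ≈ 0.67
    ```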

    One is tempted to wonder whether the bit of OfS that deals with this issue ever speaks to the bit that is determined to drive out awarding gaps based on socio-economic background (which, as we know, very closely correlates with A level results). This is certainly one way of explaining why – if you look at the raw numbers – more first class and 2:1 degrees are awarded at Russell Group universities and at small selective specialist providers.

    Based on this model (which for 2023-24 failed to accurately predict fully fifty per cent of the grades awarded) OfS selected – back in 2022(!) – three providers where it felt that the “unexplained” awards had risen surprisingly quickly over a single year.

    What OfS found (and didn’t find)

    Teesside University was not found to have ever been in breach of condition B4 – OfS was unable to identify statistically significant differences in the proportion of “good” honours awarded to a single cohort of students if it applied each of the three algorithms Teesside has used over the past decade or so. There has been – we can unequivocally say – no evidence of artificial grade inflation at Teesside University.

    St Mary’s University, Twickenham and the University of West London were found to have historically been in breach of condition B4. The St Mary’s issue related to an approach that was introduced in 2016-17 and was replaced in 2021-22, in West London the offending practice was introduced in 2015-16 and replaced in 2021-22. In both cases, the replacement was made because of an identified risk of grade inflation. And for each provider a small number of students may have had their final award calculated using the old approach since 2021-22, based on a need to not arbitrarily change an approach that students had already been told about.

    To be clear – there is no evidence that either university has breached condition B4 (not least because condition B4 came into force after the offending algorithms had been replaced). In each instance the provider in question has made changes based on the evidence it has seen that an aspect of the algorithm is not having the desired effect, exactly the way in which assurance processes should (and generally do) work.

    Despite none of the providers in question currently being in breach of B4, all three are now judged to be at an increased risk of breaching condition B4.

    No evidence has been provided as to why these three particular institutions are at an “increased risk” of a breach while others who may use substantially identical approaches to calculating final degree awards (but have not been lucky enough to undergo an OfS inspection on grade inflation) are not. Each is required to conduct a “calibration exercise” – basically a review of their approach to awarding undergraduate degrees of the sort each has already completed (and made changes based on) in recent years.

    Vibes-based regulation

    Alongside these three combined investigation/regulatory decision publications comes a report on Bachelors’ degree classification algorithms. This purports to set out the “lessons learned” from the three reports, but it actually sets up what amounts to a revision to condition B4.

    We recognise that we have not previously published our views relating to the use of algorithms in the awarding of degrees. We look forward to positive engagement with the sector about the contents of this report. Once the providers we have investigated have completed the actions they have agreed to undertake, we may update it to reflect the findings from those exercises.

    The important word here is “views”. OfS expresses some views on the design of degree algorithms, but it is not the first to do so and there are other equally valid views held by professional bodies, providers, and others – there is a live debate and a substantial academic literature on the topic. Academia is the natural home of this kind of exchange of views, and in the crucible of scholarly debate evidence and logical consistency are winning moves. Having looked at every algorithm he could find, Jim Dickinson covers the debates over algorithm characteristics elsewhere on the site.

    It does feel like these might be views expressed ahead of a change to condition B4 – something that OfS does have the power to do, but would most likely (in terms of good regulatory practice, and the sensitive nature of work related to academic standards managed elsewhere in the UK by providers themselves) be subject to a full consultation. OfS is suggesting that it is likely to find certain practices incompatible with the current B4 requirements – something which amounts to a de facto change in the rules even if it has been done under the guise of guidance.

    Providers are reminded that (as they are already expected to do) they must monitor the accuracy and reliability of current and future degree algorithms – and there is a new reportable event: providers need to tell OfS if they change their algorithm in a way that may result in an increase in the proportion of “good” honours degrees awarded.

    And – this is the kicker – when they do make these changes, the external calibration they do cannot relate to external examiner judgements. The belief here is that external examiners only ever work at a module level, and don’t have a view over an entire course.

    There is even a caveat – a provider might ask a current or former external examiner to take an external look at their algorithm in a calibration exercise, but the provider shouldn’t rely solely on their views as a “fresh perspective” is needed. This reads across to that rather confusing section of the recent white paper about “assessing the merits of the sector continuing to use the external examiner system”, while apparently ignoring the bits about “building the evidence base” and seeking employers’ views.

    Academic judgement

    Historically, all this has been a matter for the sector – academic standards in the UK’s world-leading higher education sector have been set and maintained by academics. As long ago as 2019 the UK Standing Committee for Quality Assessment (now known as the Quality Council for UK Higher Education) published a Statement of Intent on fairness in degree classification.

    It is short, clear and to the point, as was then the fashion in quality assurance circles. Right now we are concerned with paragraph b, which commits providers to protecting the value of their degrees by:

    reviewing and explaining how their process for calculating final classifications, fully reflect student attainment against learning criteria, protect the integrity of classification boundary conventions, and maintain comparability of qualifications in the sector and over time

    That’s pretty uncontroversial, as is the recommended implementation pathway in England: a published “degree outcomes statement” articulating the results of an internal institutional review.

    The idea was that these statements would show the kind of quantitative trends that OfS gets interested in, offer some assurance that institutional assessment processes meet the relevant reference points and reflect the expertise and experience of external examiners, and provide a clear and publicly accessible rationale for the degree algorithm. As Jim sets out elsewhere, in the main this has happened – though it hasn’t been an unqualified success.

    To be continued

    The release of this documentation prompts a number of questions, both on the specifics of what is being done and more widely on the way in which this approach does (or does not) constitute good regulatory practice.

    It is fair to ask, for instance, whether OfS has the power to decide that it has concerns about particular degree awarding practices, even where it is unable to point to evidence that these practices are currently having a significant impact on degrees awarded, and to promote a de facto change in interpretation of regulation that will discourage their use.

    Likewise, it seems problematic that OfS believes it has the power to declare that the three providers it investigated are at risk of breaching a condition of registration because they have an approach to awarding degrees that it has decided that it doesn’t like.

    It is concerning that these three providers have been announced as being at higher risk of a breach when other providers with similar practices have not. It is worth asking whether this outcome meets the criteria for transparent, accountable, proportionate, and consistent regulatory practice – and whether it represents action being targeted only at cases where it is demonstrably needed.

    More widely, the power to determine or limit the role and purpose of external examiners in upholding academic standards has not historically been one held by a regulator acting on behalf of the government. The external examiner system is a “sector recognised standard” (in the traditional sense) and generally commands the confidence of registered higher education providers. And it is clearly a matter of institutional autonomy – remember in HERA OfS needs to “have regard to” institutional autonomy over assessment, and it is difficult to square this intervention with that duty.

    And there is the worry about the value and impact of sector consultation – an issue picked up in the Industry and Regulators Committee review of OfS. Should a regulator really be initiating a “dialogue with the sector” when its preferences on the external examiner system are already so clearly stated? And it isn’t just the sector – a consultation needs to ensure that the views of employers (and other stakeholders, including professional bodies) are reflected in whatever becomes the final decision.

    Much of this may become clear over time – there is surely more to follow in the wider overhaul of assurance, quality, and standards regulation that was heralded in the post-16 white paper. A full consultation will help centre the views of employers, course leaders, graduates, and professional bodies – and the parallel work on bringing the OfS quality functions back into alignment with international standards will clearly also have an impact.

  • Algorithms aren’t the problem. It’s the classification system they support

    The Office for Students (OfS) has published its annual analysis of sector-level degree classifications over time, and alongside it a report on Bachelors’ degree classification algorithms.

    The former is of the style (and with the faults) we’ve seen before. The latter is the controversial bit, both in the extent to which parts of it represent a “new” set of regulatory requirements, and in the “new” set of rules it lays down over what universities can and can’t do when calculating degree results.

    Elsewhere on the site my colleague David Kernohan tackles the regulation issue – the upshots of the “guidance” on the algorithms, including what it will expect universities to do both to algorithms in use now, and if a provider ever decides to revise them.

    Here I’m looking in detail at its judgements over two practices. Universities are, to all intents and purposes, being banned from any system which discounts credits with the lowest marks – a practice which the regulator says makes it difficult to demonstrate that awards reflect achievement.

    It’s also ruling out “best of” algorithm approaches – any universities that determine degree class by running multiple algorithms and selecting the one that gives the highest result will also have to cease doing so. Any provider still using these approaches by 31 July 2026 has to report itself to OfS.

    Powers and process do matter, as do questions as to whether this is new regulation, or merely a practical interpretation of existing rules. But here I’m concerned with the principle. Has OfS got a point? Do systems such as those described above amount to misleading people who look at degree results over what a student has achieved?

    More, not less

    A few months ago now on Radio 4’s More or Less, I was asked how Covid had impacted university students’ attainment. On a show driven by data, I was wary about admitting that, on the whole, UK HE isn’t really sure.

    When in-person everything was cancelled back in 2020, universities scrambled to implement “no detriment” policies that promised students wouldn’t be disadvantaged by the disruption.

    Those policies took various forms – some guaranteed that classifications couldn’t fall below students’ pre-pandemic trajectory, others allowed students to select their best marks, and some excluded affected modules entirely.

    By 2021, more than a third of graduates were receiving first-class honours, compared to around 16 per cent a decade earlier – with ministers and OfS on the march over the risk of “baking in” the grade inflation.

    I found that pressure troubling at the time. It seemed to me that for a variety of reasons, providers may have, as a result of the pandemic, been confronting a range of faults with degree algorithms – for the students, courses and providers that we have now, it was the old algorithms that were the problem.

    But the other interesting thing for me was what those “safety net” policies revealed about the astonishing diversity of practice across the sector when it comes to working out the degree classification.

    For all of the comparison work done – including, in England, official metrics on the Access and Participation Dashboard over disparities in “good honours” awarding – I was wary about admitting to Radio 4’s listeners that it’s not just differences in teaching, assessment and curriculum that can drive someone getting a First here and a 2:2 up the road.

    When in-person teaching returned in 2022 and 2023, the question became what “returning to normal” actually meant. Many – under regulatory pressure not to “bake in” grade inflation – removed explicit no-detriment policies, and the proportion of firsts and upper seconds did ease slightly.

    But at many providers, the flexibilities introduced during Covid – around best-mark selection, module exclusions and borderline consideration – made explicit and legitimate what was already implicit in institutional frameworks. And many were kept.

    Now, in England, OfS is to all intents and purposes banning a couple of the key approaches that were deployed during Covid. For a sector that prizes its autonomy above almost everything else, that’ll trigger alarm.

    But a wider look at how universities actually calculate degree classifications reveals something – the current system embodies fundamentally different philosophies about what a degree represents, philosophies that produce systematically different outcomes for identical student performance, and philosophies that should not be written off lightly.

    What we found

    Building on David Allen’s exercise seven years ago, a couple of weeks ago I examined the publicly available degree classification regulations for more than 150 UK universities, trawling through academic handbooks, quality assurance documents and regulatory frameworks.

    The shock for the Radio 4 listener on the Clapham Omnibus would be that there is no standardised national system with minor variations – instead, there is a patchwork of fundamentally different approaches to calculating the same qualification.

    Almost every university claims to use the same framework for UG quals – the Quality Assurance Agency benchmarks, the Framework for Higher Education Qualifications and standard grade boundaries of 70 for a first, 60 for a 2:1, 50 for a 2:2 and 40 for a third. But underneath what looks like consistency there’s extraordinary diversity in how marks are then combined into final classifications.
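
    The shared starting point really is that simple – a minimal sketch of the common boundary mapping, before any of the variation described below kicks in:

    ```python
    def classify(overall_mark: float) -> str:
        """Map a final overall mark onto the standard classification boundaries."""
        if overall_mark >= 70:
            return "First"
        if overall_mark >= 60:
            return "2:1"
        if overall_mark >= 50:
            return "2:2"
        if overall_mark >= 40:
            return "Third"
        return "Fail"

    print(classify(68.3))   # 2:1 – everything that follows is about how the 68.3 is produced
    ```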

    The variations cluster around a major divide. Some universities – predominantly but not exclusively in the Russell Group – operate on the principle that a degree classification should reflect the totality of your assessed work at higher levels. Every module (at least at Level 5 and 6) counts, every mark matters, and your classification is the weighted average of everything you did.

    Other universities – predominantly post-1992 institutions but with significant exceptions – take a different view. They appear to argue that a degree classification should represent your actual capability, demonstrated through your best work.

    Students encounter setbacks, personal difficulties and topics that don’t suit their strengths. Assessment should be about demonstrating competence, not punishing every misstep along a three-year journey.

    Neither philosophy is obviously wrong. The first prioritises consistency and comprehensiveness. The second prioritises fairness and recognition that learning isn’t linear. But they produce systematically different outcomes, and the current system does allow both to operate under the guise of a unified national framework.

    Five features that create flexibility

    Five structural features appear repeatedly across university algorithms, each pushing outcomes in the same direction – upwards.

    1. Best-credit selection

    This first one has become widespread, particularly outside the Russell Group. Rather than using all module marks, many universities allow students to drop their worst performances.

    One uses the best 105 credits out of 120 at each of Levels 5 and 6. Another discards the lowest 20 credits automatically. A third takes only the best 90 credits at each level. Several others use the best 100 credits at each stage.

    The rationale is obvious – why should one difficult module or one difficult semester define an entire degree?

    But the consequence is equally obvious. A student who scores 75-75-75-75-55-55 across six modules averages 68.3 per cent. At universities where everything counts, that’s a 2:1. At universities using best-credit selection that drops the two 55s, it averages 75 – a clear first.
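
    A minimal sketch of the two approaches in that example – the module marks and credit volumes are illustrative, and real regulations add rules about which credits are eligible to be dropped:

    ```python
    def weighted_average(marks_credits):
        """Credit-weighted mean of (mark, credits) pairs."""
        total = sum(credits for _, credits in marks_credits)
        return sum(mark * credits for mark, credits in marks_credits) / total

    def best_credit_average(marks_credits, credits_to_count):
        """Keep only the highest-marked credits up to the stated volume."""
        kept, counted = [], 0
        for mark, credits in sorted(marks_credits, key=lambda mc: mc[0], reverse=True):
            take = min(credits, credits_to_count - counted)
            if take > 0:
                kept.append((mark, take))
                counted += take
        return weighted_average(kept)

    modules = [(75, 20)] * 4 + [(55, 20)] * 2        # six 20-credit modules
    print(round(weighted_average(modules), 1))       # 68.3 – a 2:1 where everything counts
    print(best_credit_average(modules, 80))          # 75.0 – a first once the two 55s are dropped
    ```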

    Best-credit selection is the majority position among post-92s, but virtually absent at Russell Group universities. OfS is now pretty much banning this practice.

    The case against rests on B4.2(c) (academic regulations must be “designed to ensure” awards are credible) and B4.4(e) (credible means awards “reflect students’ knowledge and skills”). Discounting the credits with the lowest marks “excludes part of a student’s assessed achievement” and so:

    …may result in a student receiving a class of degree that overlooks material evidence of their performance against the full learning outcomes for the course.

    2. Multiple calculation routes

    These take that principle further. Several universities calculate your degree multiple ways and award whichever result is better. One runs two complete calculations – using only your best 100 credits at Level 6, or taking your best 100 at both levels with 20:80 weighting. You get whichever is higher.

    Another offers three complete routes – unweighted mean, weighted mean and a profile-based method. Students receive the highest classification any method produces.
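
    Here is a sketch of how a “best of” scheme works in practice – the two routes and the 20:80 weighting below are a hypothetical composite rather than any named university’s actual regulations:

    ```python
    # Hypothetical "best of" scheme: run every route, award whichever is higher.
    def route_level6_only(l5_avg, l6_avg):
        return l6_avg

    def route_weighted(l5_avg, l6_avg, l5_weight=0.2, l6_weight=0.8):
        return l5_weight * l5_avg + l6_weight * l6_avg

    def best_of(l5_avg, l6_avg):
        return max(route_level6_only(l5_avg, l6_avg), route_weighted(l5_avg, l6_avg))

    print(round(best_of(58, 71), 1))   # 71 – the Level-6-only route wins, tipping into a first
    print(round(best_of(71, 58), 1))   # 60.6 – the weighted route wins instead
    ```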

    For those holding onto their “standards”, this sort of thing is mathematically guaranteed to inflate outcomes. You’re measuring the best possible interpretation of what students achieved, not what they achieved every time. As a result, comparison across institutions becomes meaningless. Again, this is now pretty much being banned.

    This time, the case against is that:

    …the classification awarded should not simply be the most favourable result, but the result that most accurately reflects the student’s level of achievement against the learning outcomes.

    3. Borderline uplift rules

    What happens on the cusps? Borderline uplift rules create all sorts of discretion around the theoretical boundaries.

    One university automatically uplifts students to the higher class if two-thirds of their final-stage credits fall within that band, even if their overall average sits below the threshold. Another operates a 0.5 percentage point automatic uplift zone. Several maintain 2.0 percentage point consideration zones where students can be promoted if profile criteria are met.

    If 10 per cent of students cluster around borderlines and half are uplifted, that’s a five per cent boost to top grades at each boundary – the cumulative effect is substantial.
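
    A sketch of two of the conventions described above – the 0.5 and 2.0 point zones and the two-thirds profile test are the figures quoted, but combining them into one rule is an illustration rather than any single provider’s regulations:

    ```python
    def classify_with_uplift(average, final_stage_marks, boundary=70.0,
                             auto_zone=0.5, profile_zone=2.0, profile_share=2/3):
        """Apply a boundary, then uplift via a narrow automatic zone or a wider
        zone conditional on the profile of final-stage marks."""
        if average >= boundary:
            return "higher class"
        if average >= boundary - auto_zone:
            return "higher class (automatic uplift)"
        in_band = sum(1 for mark in final_stage_marks if mark >= boundary)
        if average >= boundary - profile_zone and in_band / len(final_stage_marks) >= profile_share:
            return "higher class (profile uplift)"
        return "lower class"

    print(classify_with_uplift(69.6, [72, 71, 68, 65, 64, 62]))  # automatic uplift
    print(classify_with_uplift(68.4, [74, 73, 72, 71, 70, 55]))  # profile uplift (5 of 6 at 70+)
    print(classify_with_uplift(68.4, [72, 71, 68, 65, 64, 62]))  # lower class
    ```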

    One small and specialist provider plays the counterfactual – when it gained degree-awarding powers, it explicitly removed all discretionary borderline uplift. The boundaries are fixed – and it argues this is more honest than trying to maintain discretion that inevitably becomes inconsistent.

    OfS could argue borderline uplift breaches B4.2(b)’s requirement that assessments be “reliable” – defined as requiring “consistency as between students.”

    When two students with 69.4 per cent overall averages receive different classifications (one uplifted to a First, one remaining at a 2:1) based on mark distribution patterns or examination board discretion, the system produces inconsistent outcomes for identical demonstrated performance.

    But OfS avoids this argument, likely because it would directly challenge decades of established discretion on borderlines – a core feature of the existing system. Eliminating all discretion would conflict with professional academic judgment practices that the sector considers fundamental, and OfS has chosen not to pick that fight.

    4. Exit acceleration

    Heavy final-year weighting amplifies improvement while minimising early difficulties. Where deployed, the near-universal pattern is now 25 to 30 per cent for Level 5 and 70 to 75 per cent for Level 6. Some institutions weight even more heavily, with year three counting for 60 per cent of the final mark.

    A student who averages 55 in year two and 72 in year three gets 67.75 overall with a 25:75 weighting – a 2:1. A student who averages 72 in year two and 55 in year three gets 59.25 – just short of a 2:1.
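
    The calculation itself is trivial – a sketch of the example above, assuming a 25:75 split and ignoring everything else a real scheme would layer on top:

    ```python
    def exit_weighted_mark(year2_avg, year3_avg, year2_weight=0.25, year3_weight=0.75):
        """Exit-weighted average across Levels 5 and 6 (first year excluded)."""
        return year2_weight * year2_avg + year3_weight * year3_avg

    print(exit_weighted_mark(55, 72))   # 67.75 – a 2:1 for the late bloomer
    print(exit_weighted_mark(72, 55))   # 59.25 – just short for the early starter
    ```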

    The magnitude of change is identical – it’s just that the direction differs. The system structurally rewards late bloomers and penalises any early starters who plateau.

    OfS could argue that 75 per cent final-year weighting breaches B4.2(a)’s requirement for “appropriately comprehensive” assessment. B4 Guidance 335M warns that assessment “focusing only on material taught at the end of a long course… is unlikely to provide a valid assessment of that course,” and heavy (though not exclusive) final-year emphasis arguably extends this principle – if the course’s subject matter is taught across three years, does minimising assessment of two-thirds of that teaching constitute comprehensive evaluation?

    But OfS doesn’t make this argument either, likely because year weighting is explicit in published regulations, often driven by PSRB requirements, and represents settled institutional choices rather than recent innovations. Challenging it would mean questioning established pedagogical frameworks rather than targeting post-hoc changes that might mask grade inflation.

    5. First-year exclusion

    Finally, with a handful of institutional and PSRB exceptions, the practice of the first year not counting is now pretty much universal, removing what used to be the bottom tail of performance distributions.

    While this is now so standard it seems natural, it represents a significant structural change from 20 to 30 years ago. You can score 40s across the board in first year and still graduate with a first if you score 70-plus in years two and three.

    Combine it with other features, and the interaction effects compound. At universities using best 105 credits at each of Levels 5 and 6 with 30:70 weighting, only 210 of 360 total credits – 58 per cent – actually contribute to your classification. And so on.
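
    A quick sketch of that compounding effect, assuming the configuration just described (a 360-credit degree, first year excluded, best 105 credits kept at each of Levels 5 and 6):

    ```python
    total_credits = 360                      # 120 credits per level across three years
    counted_per_level = 105                  # best-credit selection at Levels 5 and 6
    contributing = counted_per_level * 2     # the first year contributes nothing
    print(contributing, f"{contributing / total_credits:.0%}")   # 210 58%
    ```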

    OfS could argue first-year exclusion breaches comprehensiveness requirements – when combined with best-credit selection, only 210 of 360 total credits (58 per cent) might count toward classification. But the practice is now so widespread – with only a handful of institutional and PSRB exceptions – that it is treated as neutral, accepted practice rather than a compliance concern.

    Targeting something this deeply embedded across the sector would face overwhelming institutional autonomy defences, and would effectively require the sector to reinstate a practice it collectively abandoned over the past two decades.

    OfS’ strategy is to focus regulatory pressure on recent adoptions of “inherently inflationary” practices rather than challenging longstanding sector-wide norms.

    Institution type

    Russell Group universities generally operate on the totality-of-work philosophy. Research-intensives typically employ single calculation methods, count all credits and maintain narrow borderline zones.

    But there are exceptions. One I’ve seen has automatic borderline uplift that’s more generous than many post-92s. Another’s 2.0 percentage point borderline zone adds substantial flexibility. If anything, the pattern isn’t uniformity of rigour – it’s uniformity of philosophy.

    One London university has a marks-counting scheme rather than a weighted average – what some would say is the most “rigorous” system in England. And two others – you can guess who – don’t fit this analysis at all, with subject-specific systems and no university-wide algorithms.

    Post-1992s systematically deploy multiple flexibility features. Best-credit selection appears at roughly 70 per cent of post-92s. Multiple calculation routes appear at around 40 per cent of post-92s versus virtually zero per cent at research-intensive institutions. Several post-92s have introduced new, more flexible classification algorithms in the past five years, while Russell Group frameworks have been substantially stable for a decade or more.

    This difference reflects real pressures. Post-92s face acute scrutiny on student outcomes from league tables, OfS monitoring and recruitment competition, and disproportionately serve students from disadvantaged backgrounds with lower prior attainment.

    From one perspective, flexibility is a cynical response to metrics pressure. From another, it’s recognition that their students face different challenges. Both perspectives contain truth.

    Meanwhile, Scottish universities present a different model entirely, using GPA-based calculations across SCQF Levels 9 and 10 within four-year degree structures.

    The Scottish system is more internally standardised than the English system, but the two are fundamentally incompatible – and any OfS attempt to mandate standardisation stops at the border in any case, since education is devolved and Scottish universities sit outside its remit.

    London is a city with maximum algorithmic diversity within minimum geographic distance. Major London universities use radically different calculation systems despite competing for similar students. A student with identical marks might receive a 2:1 at one, a first at another and a first with a higher average at a third, purely because of algorithmic differences.

    What the algorithm can’t tell you

    The “five features” capture most of the systematic variation between institutional algorithms. But they’re not the whole story.

    First, they measure the mechanics of aggregation, not the standards of marking. A 65 per cent essay at one university may represent genuinely different work from a 65 per cent at another. External examining is meant to moderate this, but the system depends heavily on trust and professional judgment. Algorithmic variation compounds whatever underlying marking variation exists – but marking standards themselves remain largely opaque.

    Second, several important rules fall outside the five-feature framework but still create significant variation. Compensation and condonement rules – how universities handle failed modules – differ substantially. Some allow up to 30 credits of condoned failure while still classifying for honours. Others exclude students from honours classification with any substantial failure, regardless of their other marks.

    Compulsory module rules also cut across the best-credit philosophy. Many universities mandate that dissertations or major projects must count toward classification even if they’re not among a student’s best marks. Others allow them to be dropped. A student who performs poorly on their dissertation but excellently elsewhere will face radically different outcomes depending on these rules.

    In a world where huge numbers of students now have radically less module choice than they did just a few years ago as a result of cuts, they would have reason to feel doubly aggrieved if modules they never wanted to take in the first place now count when, under the rules as they stood last week, they didn’t.

    Several universities use explicit credit-volume requirements at each classification threshold. A student might need not just a 60 per cent average for a 2:1, but also at least 180 credits at 60 per cent or above, including specific volumes from the final year. This builds dual criteria into the system – you need both the average and the profile. It’s philosophically distinct from borderline uplift, which operates after the primary calculation.
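
    A sketch of that dual test – the 60 per cent average and the 180-credit volume are the figures from the example above, while the final-year volume requirement is a hypothetical placeholder:

    ```python
    def meets_two_one(average, marks_credits, final_year_credits_at_60):
        """Dual criteria: the average AND the credit-volume profile must both clear the bar."""
        credits_at_60_plus = sum(credits for mark, credits in marks_credits if mark >= 60)
        return (average >= 60
                and credits_at_60_plus >= 180
                and final_year_credits_at_60 >= 100)    # hypothetical final-year volume

    marks = [(68, 20)] * 6 + [(62, 20)] * 3 + [(55, 20)] * 3    # 240 classified credits
    average = sum(m * c for m, c in marks) / sum(c for _, c in marks)
    print(round(average, 2), meets_two_one(average, marks, final_year_credits_at_60=100))
    # 63.25 True – but swap one 62 for a 58 and the volume test fails despite a 2:1 average
    ```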

    And finally, treatment of reassessed work varies. Nearly all universities cap resit marks at the pass threshold, but some exclude capped marks from “best credit” calculations while others include them. For students who fail and recover, this determines whether they can still achieve high classifications or are effectively capped at lower bands regardless of their other performance.

    The point isn’t so much that I (or OfS) have missed the “real” drivers of variation – the five features genuinely are the major structural mechanisms. But the system’s complexity runs deeper than any five-point list can capture. When we layer compensation rules onto best-credit selection, compulsory modules onto multiple calculation routes, and volume requirements onto borderline uplift, the number of possible institutional configurations runs into the thousands.

    The transparency problem

    Every day’s a school day at Wonkhe, but what has been striking for me is quite how difficult the information has been to access and compare. Some institutions publish comprehensive regulations as dense PDF documents. Others use modular web-based regulations across multiple pages. Some bury details in programme specifications. Several have no easily locatable public explanation at all.

    UUK’s position on this, I’d suggest, is something of a stretch:

    University policies are now much more transparent to students. Universities are explaining how they calculate the classification of awards, what the different degree classifications mean and how external examiners ensure consistency between institutions.

    Publication cycles vary unpredictably, cohort applicability is often ambiguous, and cross-referencing between regulations, programme specifications and external requirements adds layers upon layers of complexity. The result is that meaningful comparison is effectively impossible for anyone outside the quality assurance sector.

    This opacity matters because it masks that non-comparability problem. When an employer sees “2:1, BA in History” on a CV, they have no way of knowing whether this candidate’s university used all marks or selected the best 100 credits, whether multiple calculation routes were available or how heavily final-year work was weighted. The classification looks identical regardless. That makes it more, not less, likely that they’ll just go on prejudices and league tables – regardless of the TEF medal.

    We can estimate the impact conservatively. Year one exclusion removes perhaps 10 to 15 per cent of the performance distribution. Best-credit selection removes another five to 10 per cent. Heavy final-year weighting amplifies improvement trajectories. Multiple calculation routes guarantee some students shift up a boundary. Borderline rules uplift perhaps three to five per cent of the cohort at each threshold.

    Stack these together and you could shift perhaps 15 to 25 per cent of students up one classification band compared to a system that counted everything equally with single-method calculation and no borderline flexibility. Degree classifications are measuring as much about institutional algorithm choices as about student learning or teaching quality.

    Yes, but

    When universities defend these features, the justifications are individually compelling. Best-credit selection rewards students’ strongest work rather than penalising every difficult moment. Multiple routes remove arbitrary disadvantage. Borderline uplift reflects that the difference between 69.4 and 69.6 per cent is statistically meaningless. Final-year emphasis recognises that learning develops over time. First-year exclusion creates space for genuine learning without constant pressure.

    None of these arguments is obviously wrong. Each reflects defensible beliefs about what education is for. The problem is that they’re not universal beliefs, and the current system allows multiple philosophies to coexist under a facade of equivalence.

    Post-92s add an equity dimension – their flexibility helps students from disadvantaged backgrounds who face greater obstacles. If standardisation forces them to adopt strict algorithms, degree outcomes will decline at institutions serving the most disadvantaged students. But did students really learn less, or attain to a “lower” standard?

    The counterargument is that if the algorithm itself makes classifications structurally easier to achieve, you haven’t promoted equity – you’ve devalued the qualification. And without the sort of smart, skills and competencies based transcripts that most of our pass/fail cousins across Europe adopt, UK students end up choosing between a rock and a hard place – if only they were conscious of that choice.

    The other thing that strikes me is that the arguments I made in December 2020 for “baking in” grade inflation haven’t gone away just because the pandemic has. If anything, the case for flexibility has strengthened as the cost of living crisis, inadequate maintenance support and deteriorating student mental health create circumstances that affect performance through no fault of students’ own.

    Students are working longer hours in paid employment to afford rent and food, living in unsuitable accommodation, caring for family members, and managing mental health conditions at record levels. The universities that retained pandemic-era flexibilities – best-credit selection, generous borderline rules, multiple calculation routes – aren’t being cynical about grade inflation. They’re recognising that their students disproportionately face these obstacles, and that a “totality-of-work” philosophy systematically penalises students for circumstances beyond their control rather than assessing what they’re actually capable of achieving.

    The philosophical question remains – should a degree classification reflect every difficult moment across three years, or should it represent genuine capability demonstrated when circumstances allow? Universities serving disadvantaged students have answered that question one way – research-intensive universities serving advantaged students have answered it another.

    OfS’s intervention threatens to impose the latter philosophy sector-wide, eliminating the flexibility that helps students from disadvantaged backgrounds show their “best selves” rather than punishing them for structural inequalities that affect their week-to-week performance.

    Now what

    As such, a regulator seeking to intervene faces an interesting challenge with no obviously good options – albeit one of its own making. Another approach might have been to cap the most egregious practices – prohibit triple-route calculations, limit best-credit selection to 90 per cent of total credits, cap borderline zones at 1.5 percentage points.

    That would eliminate the worst outliers while preserving meaningful autonomy. The sector would likely comply minimally while claiming victory, but oodles of variation would remain.

    A stricter approach would be mandating identical algorithms – but that would provoke rebellion. The devolved nations sit outside OfS’ remit in any case, and any attempt to reach across the border would trigger a constitutional row. Research-intensive universities would mount legal challenges on academic freedom grounds, if they’re not preparing to do so already. Post-92s would deploy equity arguments, claiming standardisation harms universities serving disadvantaged students.

    A politically savvy but inadequate approach might have been mandatory transparency rather than prescription. Requiring universities to publish their algorithms in a standardised format, along with the philosophy underpinning them, would help. That might preserve autonomy while creating a bit of accountability. Maybe competitive pressure and reputational risk would drive voluntary convergence.

    But universities will resist even being forced to quantify and publicise the effects of their grading systems. They’ll argue it undermines confidence and damages the UK’s international reputation.

    Given the diversity of courses, providers, students and PSRBs, algorithms also feel like a weird thing to standardise. I can make a much better case for a defined set of subject awards with a shared governance framework (including subject benchmark statements, related PSRBs and degree algorithms) than I can for tightening standardisation in isolation.

    The fundamental problem is that the UK degree classification system was designed for a different age, a different sector and a different set of students. It was probably a fiction to imagine that sorting everyone into First, 2:1, 2:2 and Third was possible even 40 years ago – but today, it’s such obvious nonsense that without richer transcripts, it just becomes another way to drag down the reputation of the sector and its students.

    Unfit for purpose

    In 2007, the Burgess Review – commissioned by Universities UK itself – recommended replacing honours degree classifications with detailed achievement transcripts.

    Burgess identified the exact problems we have today – considerable variation in institutional algorithms, the unreliability of classification as an indicator of achievement, and the fundamental inadequacy of trying to capture three years of diverse learning in a single grade.

    The sector chose not to implement Burgess’s recommendations, concerned that moving away from classifications would disadvantage UK graduates in labour markets “where the classification system is well understood.”

    Eighteen years later, the classification system is neither well understood nor meaningful. A 2:1 at one institution isn’t comparable to a 2:1 at another, but the system’s facade of equivalence persists.

    The sector chose legibility and inertia over accuracy and ended up with neither – sticking with a system that protected institutional diversity while robbing students of the ability to show off theirs. As we see over and over again, a failure to fix the roof when the sun was shining means reform may now arrive externally imposed.

    Now the regulator is knocking on the conformity door, there’s an easy response. OfS can’t take an annual pop at grade inflation if most of the sector abandons the outdated and inadequate degree classification system. Nothing in the rules seems to mandate it, some UG quals don’t use it (think regulated professional bachelors), and who knows where the White Paper’s demand for meaningful exit awards at Level 4 and 5 fits into all of this.

    Maybe we shouldn’t be surprised that a regulator that oversees a meaningless and opaque medal system with a complex algorithm that somehow boils an entire university down to “Bronze”, “Silver”, “Gold” or “Requires Improvement” is keen to keep hold of the equivalent for students.

    But killing off the dated relic would send a really powerful signal – that the sector is committed to developing the whole student, explaining their skills and attributes and what’s good about them – rather than pretending that the classification makes the holder of a 2:1 “better” than those with a Third, and “worse” than those with a First.

  • The white paper on regulation

    The Office for Students is a creation of the 2017 Higher Education and Research Act, but this legislation was not the last word on the matter.

    It has gained new powers and new responsibilities over the years, and – looking closely at the white paper – it looks set to expand its powers, capabilities, and capacity even further.

    As the Department for Education, and ministers and politicians more generally, make new demands of the regulator, it needs to be given the power to meet them. And this generally needs to happen via amendment of HERA, which almost always requires further primary legislation.

    It is clear that much of the change that ministers expect to see in the higher education sector – as set out via the white paper – needs to happen via the action of the regulator.

    Regulation, rebooted

    The 2022 Skills and Post-16 Education Act gave OfS explicit powers to assess the quality of higher education with reference to student outcomes, protection from defamation claims based on regulatory decisions, and the duty to publish details of investigations.

    The 2023 Higher Education (Freedom of Speech) Act, alongside the various measures linked directly to freedom of speech and academic freedom, attempted to grant OfS the power to monitor overseas funding – this power was, in the end, never brought into force.

    These decisions to give OfS new powers and new duties will have been influenced by legal embarrassment (the student outcomes and defamation issues) and perceived threats (such as on overseas funding or freedom of speech), but measures are generally finessed in conversation with the regulator and its own assessment of powers that it needs.

    It is fair to assume that OfS – famously careful around legal risk – would largely prefer to have more powers rather than fewer. The diversity of the sector, and the range of political and public concern about what providers actually get up to, mean that the regulator may often feel pressured to act in ways that it is not, technically, permitted to. This is not a risk limited to OfS – witness the Department for Education’s legal travails regarding Oxford Business College.

    Aspects requiring primary legislation

    The white paper offers the Office for Students a number of new powers to do things which – on the face of it – it can already do and has already done. What we need to keep an eye on here is where the amping up of these existing powers happens in a way that overrides safeguards that exist to prevent arbitrary and unfair regulatory action. It is already troubling that, unlike pretty much anyone else, the Office for Students is legally unable to defame a provider (for example by posting details of an investigation including details that are later shown to be false).

    Quality

    The Department for Education seems to labour under the misconception that OfS cannot restrict a provider’s ability to recruit on the basis of “poor quality”. It can – and has done so four times since the regulator was established. Nonetheless, the government will legislate “when parliamentary time allows” to give OfS these powers again using slightly different words – and probably modifying sections 5 and 6 of HERA to allow it to do so (currently, the Secretary of State cannot give OfS guidance that relates to the recruitment and admission of students).

    This would be part of a wider portfolio of new powers for OfS, allowing it to intervene decisively to tackle poor quality provision (including within franchise arrangements), prevent the abuse of public money at registered providers, and safeguard against provision with poor outcomes for students.

    Again – these are powers, in the broadest sense, that OfS already has. It has already intervened to tackle low quality provision (including poor quality outcomes for students) via the B3 and other B condition investigations and linked regulatory rulings. And it has already intervened on franchise arrangements (most recently opening an investigation into the arrangement between Bath Spa University and the Fairfield School of Business).

    There will be a strengthening of powers to close down provision where fraud or the misuse of public funds is identified – and here it is fair to read across to concerns about franchise provision and the work of (“unscrupulous”) UK recruitment agents. Condition E8 – which specifically addresses the wider issues of fraud and misuse of public funds – currently applies only to new registrants: it is fair to ask why extending this to currently registered providers is not under consideration as a non-legislative approach. Clearly the infamous powers of entry and search (HERA section 61) and the power to require information from unregistered providers (HERA section 62) are not cutting it.

    Linked to these, OfS will be able to build capacity to carry out more investigations and to do so at greater speed – for the first part of which, read: OfS will get more money from DfE. It already gets roughly £10m each year, which covers things like running TEF and administering the freedom of speech complaints scheme – this is on top of around £32m in registration fees from the sector (also public money), which sounds like a lot but doesn’t even cover staff costs at OfS. We are awaiting a consultation on OfS registration fees for providers for the future, so it is possible this situation may change.

    OfS’ proposed new quality regime is centred around TEF, a “section 25” scheme in the language of HERA. Schedule 2, section 2, of HERA is clear that a section 25 scheme can be used to vary the fee cap for individual providers. Indeed, it is currently used to vary the cap – if you don’t have a TEF award (at all) you can only charge a maximum of £9,275 next year. So no fancy legislative changes would be required to make fee uplifts conditional on a “higher quality threshold” if you happened to believe that a provider’s income per student should be determined by outcomes data from a decade ago.

    Not strictly speaking “quality”, but OfS will also get stronger regulatory power to take robust action against providers that breach their duties under the Higher Education (Freedom of Speech) Act – beyond even fining providers (as it has recently done to the University of Sussex) and deregistering (or adding a condition of registration via conditions E1 and E2), a power it has had since HERA was passed. I’m not sure what would constitute more robust action than that.

    Access and participation

    The access and participation plan (APP) regime is a remnant of the work of the former Office for Fair Access (OFFA). The Higher Education Act 2004 gave this body the ability to call for and assess “access agreements”, with the approval of OFFA needed for a provider to charge higher fees. Section 29 of HERA gave the impression of handing these powers directly over to the Office for Students – but in actuality it gave a lot more direct power to the Secretary of State to specify the content of plans and the way they are assessed via regulations.

    The proposals in the white paper look for a risk-based approach to APP, but at provider level – not the more general risks associated with particular groups of students that we find in the OfS’ current approach. Providers that do well at access and participation will benefit from streamlined regulation; for those that do not, the experience may involve a little more pain.

    The big change is that access and participation will now look in a lot more detail at postgraduate provision and the postgraduate student experience. But section 32(5)(b) of HERA specifically prohibits plans from addressing “education provided by means of any postgraduate course other than a course of initial teacher training”. So we could expect some kind of legislative action (it may be possible to do this via regulations, but if there is a bill coming then why not?) to address this issue. And besides that, there will be a load of regulations and guidance from the Secretary of State setting out what she would like John Blake or his successor to do.

    Aspects requiring changes to the regulatory framework

    Registration

    In what is fast becoming a more closely coupled tertiary sector, OfS is set to become a primary regulator for every provider of higher education. There are three sets of providers that will be affected by this move:

    • Further education colleges (FECs) delivering higher education (courses at levels 4 and above)
    • Other providers delivering provision currently funded via Advanced Learner Loans (ALL)
    • Other providers designated for student loan support, including those delivering courses via franchise and partnership arrangements.

    In each of these cases, provision that is to all intents and purposes higher education is currently delivered without the direct oversight of the higher education regulator. This may be delivered with the oversight and approval of a higher education provider (franchise and partnership provision), or with the oversight of Ofqual (there are hundreds of these).

    The regulation of this kind of provision within FECs is probably best understood – as things stand, all of the fundamental regulation of these bodies (stuff like governance and financial planning) happens via the Department for Education, which took on this role when the Education and Skills Funding Agency was abolished on 31 March 2025. The Department then provides assurance to the Office for Students and data to HESA.

    Designation for student support nominally happens via a decision made by the Secretary of State (section 84 of HERA) – in practice this happens by default for anyone delivering higher education. As we saw in the Oxford Business College case, arrangements like this are predicated on the assumption that what we might call regulation (quality and standards, and also – I guess – fit and proper person type stuff) is pushed onto the validating organisation with varying degrees of confidence.

    Advanced Learner Loan (ALL) funded provision, confusingly, is technically further education (level 3 and up), but the logic of the machinery of the Lifelong Learning Entitlement wants to bring the upper end of this provision into the ambit of OfS. There was initially supposed to be a separate category of registration for ALL provision with OfS, but this plan has been scrapped.

    We’ve known informally that it was unlikely to happen for some time, but it was actually the white paper that put the nail in the coffin. OfS will be consulting, this autumn, on the disapplication of certain conditions of registration for providers in the further education sector – though this shift will be a slow process, with current ALL arrangements extending through to 2030. But this consultation is very likely to extend much wider – recall that OfS is also tasked with a more robust approach to market entry (which, again, would be done via registration).

    Likewise, OfS has been tasked with toughening up the (E) conditions on governance, and the (D) conditions on financial sustainability (which would include forming a system-wide view of sector resilience working with UKRI) – we’ve seen evidence of a rethought approach to governance in the new conditions (E7 and E9) for initial registration, and have suspected that a further consultation would apply this to more providers.

    Degree awarding powers

    The ability to award your own qualifications is an important reputational stepping stone for any provider entering the higher education sector. It has an impact on the ability to design and run new courses, and also brings a financial benefit – no need to pay capitation on fee income to your academic partners. While quality and standards play a role in OfS registration decisions, these two aspects of provision are central to assessment for degree awarding powers as expressed via:

    An emerging self-critical, cohesive academic community with a clear commitment to the assurance of standards supported by effective (in prospect) quality systems.

    The current system (as of 1 April 2023) is run by the Office for Students after the decision of the QAA to demit from the role of Designated Quality Body. Aspects dealing with student protection, financial probity, and arrangements for progression are dealt with as a precursor to a full assessment – and here OfS looks for evidence that courses have been developed and approved in accordance with sector recognised standards: currently copy-pasted from the QAA’s (2014) framework for higher education qualifications and the UKSCQA degree classification descriptions (2019).

    When this arrangement was set up back in 2022 it was somewhat controversial. There was no sign of the sector recognised standard that is the QAA Quality Code, and seemingly no mechanism to update the official list of standards recognised by the sector as they are restated elsewhere. There is a mention of sector recognised standards in HERA, but these need to be determined by “persons representing a broad range of registered higher education providers” and “command the confidence of registered higher education providers”.

    External examiners are not mentioned in the sector recognised standards (despite being a standard that is recognised by the sector), but are mentioned in DAPs criterion B3k on the quality of the academic experience, and in C1g on allowing academics to be external examiners elsewhere to gain experience (which F1i clarifies should be a third of academic staff where research degrees are offered). If you are applying for full DAPs you need to send OfS a sample of external examiner reports.

    In the white paper it is suggested that government is not convinced of the value of external examination – here’s the charge sheet:

    • We will consider the extent to which recent patterns of improving grades can be explained by an erosion of standards, rather than improved teaching or assessment practices
    • We will also continue to build the evidence base on the effectiveness or otherwise of the external examining system, which we will feed into the Office for Students’ programme for reform
    • We will also seek employers’ views about whether the academic system is giving graduates the skills and knowledge they need for the workplace.

    Of course, this sails worryingly close to devolved issues, as the external examiner infrastructure extends far beyond England: it is a requirement, supported by the sector, in Wales, Scotland, and Northern Ireland. External examiners do not often have any input into awarded degree classifications (those are determined by degree algorithms set internally by providers), so they are not particularly likely to be a determining factor in more people getting a first.

    Indeed, the sector came together (back in 2022) to publish a set of External Examining Principles which exist as an annex to the statement of intent on degree classifications that forms a part of the OfS’s “sector-recognised standards.” It’s a good read for anyone who does not have a full understanding of the role of external examiners, both within institutional processes and those of the many professional, statutory, and regulatory bodies (PSRBs).

    This isn’t yet at the point of a consultation, just work the Office for Students is doing to update the process – a body of work that will also establish the concept and process of granting Higher Technical Qualification awarding powers. But we should watch some of the language around the next release of the OfS’ monitoring work on grade inflation – especially as the 2022 insight brief highlighted UUK work to strengthen the external examiner system as a key tool to address the issue.

    Other new responsibilities

    White papers generally try to make changes to the provision of applicant information – we have the 2004 white paper to thank for what is now known as Discover Uni, and the 2015 white paper put forward a simple precious metals based system that we have come to love as the Teaching Excellence Framework. Thankfully this time round it is a matter of incorporating Discover Uni style data onto UCAS course pages – which (and honestly I’m sorry to keep doing this) you can already find in a “student outcomes” section operated by the Office for Students. The white paper asks for continuation data to be added to this widget – I imagine not a huge piece of work.

    It’s 2025, so every document has to mention what is popularly known as “artificial intelligence” and which we might more accurately describe as generative large language models. A few paragraphs tacked on to the end of the white paper ask OfS to assess the impact of such tools on assessments and qualifications – adding, almost in passing, that it expects that “all students will learn about artificial intelligence as part of their higher education experience”. In direct, but light-hearted I am sure, contravention of section 8 (a)(i) of HERA, which says that providers are free to determine the content of courses.

    Which brings us to Progress 8 – a measure used in schools policy which adds together pupils’ highest scores from eight (hence the name, I suppose) GCSEs that the government thinks are important (English and maths, plus “English Baccalaureate” subjects like the sciences, history, geography, and languages). This produces a cohort average used to compare schools (“Attainment 8”), and compares the average performance of pupils in a given school cohort with how the same pupils did at primary school, as a kind of value added measure (“Progress 8”). In other words, DfE looking in the white paper to work with OfS to build Progress 8 but for higher education is another stab at learning gain measures – something we’ve been investigating since the days of HEFCE, and which has never been shown to work on a national scale.
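    For those who like to see the arithmetic, here is a minimal sketch of how an Attainment 8 style cohort average and a Progress 8 style value added score fit together. The slot rules and point scores below are deliberately simplified for illustration – this is not the DfE specification, which double-weights English and maths and constrains which subjects can fill each slot.

    ```python
    # Simplified sketch of Attainment 8 / Progress 8 style arithmetic.
    # Point scores and expected scores below are invented for illustration.

    def attainment8(gcse_scores):
        """Sum a pupil's best eight GCSE point scores (simplified)."""
        return sum(sorted(gcse_scores, reverse=True)[:8])

    def progress8(gcse_scores, expected_score):
        """Value added: actual Attainment 8 minus the score expected from
        the pupil's prior (primary school) attainment."""
        return attainment8(gcse_scores) - expected_score

    cohort = [
        # (GCSE point scores, expected score given Key Stage 2 results)
        ([7, 6, 6, 5, 5, 5, 4, 4, 3], 44.0),
        ([9, 8, 8, 7, 7, 6, 6, 5], 52.0),
    ]

    avg_attainment8 = sum(attainment8(s) for s, _ in cohort) / len(cohort)
    avg_progress8 = sum(progress8(s, e) for s, e in cohort) / len(cohort)
    print(avg_attainment8, avg_progress8)
    ```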

    Trust and confidence

    Regulation works via the consent of the regulated. Everyone from Universities UK down has been at pains to point out that they do see the value of higher education regulation, even if it was expressed in kind of a “more in sorrow than in anger” way at the primal therapy that was the House of Lords Industry and Regulators Committee.

    But this agreement over value is determined by a perception that the actions of the regulator are fair, predictable, and proportionate. These qualities can be seen by inexperienced regulators as a block to speedy and decisive action, but the work OfS has done to reset what was initially a very fractious relationship with the sector (and associated bodies) suggests that the importance of consensual regulation is fully understood on Lime Kiln Close.

    Every time the OfS gets, or asks for, new powers, it affects the calculus of value to the sector. Here it is less a matter of new powers and more an issue of strengthening and extending existing powers (despite the occasionally confused language of the white paper). Everyone involved is surely aware that a strong power is a power that is perceived as fair – and is challengeable when it appears to be unfair. The occasional lawsuits OfS (and DfE) have faced have happened when someone was keen to do the right thing but did not go about it in the right way.

    The coming consultations – ahead of legislation and changes to the framework – need to be genuine listening exercises, even if this means adding the kind of nuance that slows things down, or reflecting on the powers OfS already has and looking for ways to improve their effective use.


  • Can regulation cope with a unified tertiary system in Wales?

    Can regulation cope with a unified tertiary system in Wales?

    Medr’s second consultation on its regulatory framework reminds us both of the comparatively small size of the Welsh tertiary sector, and the sheer ambition – and complexity – of bringing FE, HE, apprenticeships and ACL under one roof.

    Back in May, Medr (the operating name of the Commission for Tertiary Education and Research in Wales) launched its first consultation on the new regulatory system required by the Tertiary Education and Research (Wales) Act 2022.

    At that stage the sector’s message was that it was too prescriptive, too burdensome, and insufficiently clear about what was mandatory versus advisory.

    Now, five months later, Medr has returned with a second consultation that it says addresses those concerns. The documents – running to well over 100 pages across the main consultation text and six annexes – set out pretty much the complete regulatory framework that will govern tertiary education in Wales from August 2026.

    It’s much more than a minor technical exercise – it’s the most ambitious attempt to create a unified regulatory system across further education, higher education, apprenticeships, adult community learning and maintained school sixth forms that the UK has yet seen.

    As well as that, it’s trying to be both a funder and a regulator; to be responsive to providers while putting students at the centre; and to avoid some of the mistakes that it has seen the Office for Students (OfS) make in England.

    Listening and responding

    If nothing else, it’s refreshing to see a sector body listening to consultation responses. Respondents wanted clearer signposts about what constitutes a compliance requirement versus advisory guidance, and worried about cumulative burden when several conditions and processes come together.

    They also asked for alignment with existing quality regimes from Estyn and the Quality Assurance Agency, and flagged concerns about whether certain oversight might risk universities’ status as non-profit institutions serving households (NPISH) – a technical thing, but one with significant implications for institutional autonomy.

    Medr’s response has been to restructure the conditions more clearly. Each now distinguishes between the condition itself (what must be met), compliance requirements that evidence the condition, and guidance (which providers must consider but may approach differently if they can justify that choice).

    It has also adopted a “make once, use many” approach to information, promising to rely on evidence already provided to Estyn, QAA or other bodies wherever it fits their purpose. And it has aligned annual planning and assurance points with sector cycles “wherever possible.”

    The question, of course, is whether this constitutes genuine simplification or merely better-organised complexity. Medr is establishing conditions of registration for higher education providers (replacing Fee and Access Plans), conditions of funding for FE colleges and others, and creating a unified quality framework and learner engagement code that applies across all tertiary education.

    The conditions themselves

    Some conditions apply universally. Others apply only to registered providers, or only to funded providers, or only to specific types of provision. As we’ve seen in England, the framework includes initial and ongoing conditions of registration for higher education providers (in both the “core” and “alternative” categories), plus conditions of funding that apply more broadly.

    Financial sustainability requires providers to have “strategies in place to ensure that they are financially sustainable” – which means remaining viable in the short term (one to two years), sustainable over the medium term (three to five years), and maintaining sufficient resources to honour commitments to learners. The supplementary detail includes a financial commitments threshold mechanism based on EBITDA ratios.

    Providers exceeding certain multiples will need to request review of governance by Medr before entering new financial commitments. That’s standard regulatory practice – OfS has equivalent arrangements in England – but it represents new formal oversight for Welsh institutions.
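    To make the mechanism a little more concrete, here is a rough sketch of how a commitments-to-EBITDA trigger of this kind might operate. The threshold multiple used below is an assumption for illustration only – the actual figures sit in Medr’s supplementary detail.

    ```python
    # Illustrative sketch of a financial commitments threshold mechanism.
    # The multiple is a made-up number, not Medr's actual threshold.

    ASSUMED_THRESHOLD_MULTIPLE = 4.0  # hypothetical commitments-to-EBITDA ratio

    def needs_governance_review(existing_commitments, ebitda, new_commitment,
                                threshold=ASSUMED_THRESHOLD_MULTIPLE):
        """Return True if taking on the new commitment would push the
        commitments-to-EBITDA ratio over the (assumed) threshold, meaning the
        provider should request a review of governance before proceeding."""
        if ebitda <= 0:
            return True  # no headroom at all - always refer
        ratio = (existing_commitments + new_commitment) / ebitda
        return ratio > threshold

    # A provider with £40m of commitments, £12m EBITDA and a proposed £15m loan.
    print(needs_governance_review(40_000_000, 12_000_000, 15_000_000))  # True
    ```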

    Critically, Medr says its role is “to review and form an opinion on the robustness of governance over proposed new commitments, not to authorise or veto a decision that belongs to your governing body.” That’s some careful wording – but whether it will prove sufficient in practice (both in detail and in timeliness) when providers are required to seek approval before major financial decisions remains to be seen.

    Governance and management is where the sector seems to have secured some wins. The language around financial commitments has been softened from “approval” to “review.” The condition now focuses on outcomes – “integrity, transparency, strong internal control, effective assurance, and a culture that allows challenge and learning” – rather than prescribing structures.

    And for those worried about burden, registered higher education providers will no longer be required to provide governing body composition, annual returns of serious incidents, individual internal audit reports, or several other elements currently required under Fee and Access Plans. That is a reduction – but it won’t make a lot of difference to anyone other than the person lumbered with gathering the sheaf of stuff to send in.

    Quality draws on the Quality Framework (Annex C) and requires providers to demonstrate their provision is of good quality and that they engage with continuous improvement. The minimum compliance requirements, evidenced through annual assurance returns, include compliance with the Learner Engagement Code, using learner survey outcomes in quality assurance, governing body oversight of quality strategies, regular self-evaluation, active engagement in external quality assessment (Estyn inspection and/or QAA review), continuous improvement planning, and a professional learning and development strategy.

    The framework promises that Medr will “use information from existing reviews and inspections, such as by Estyn and QAA” and “aim not to duplicate existing quality processes.” Notably, Medr has punted the consultation on performance indicators to 2027, so providers won’t know what quantitative measures they’ll be assessed against until the system is already live.

    Staff and learner welfare sets out requirements for effective arrangements to support and promote welfare, encompassing both “wellbeing” (emotional wellbeing and mental health) and “safety” (freedom from harassment, misconduct, violence including sexual violence, and hate crime). Providers will have to conduct an annual welfare self-evaluation and submit an annual welfare action plan to Medr. This represents new formal reporting – even if the underlying activity isn’t new.

    The Welsh language condition requires providers to take “all reasonable steps” to promote greater use of Welsh, increase demand for Welsh-medium provision, and (where appropriate) encourage research and innovation activities supporting the Welsh language. Providers must publish a Welsh Language Strategy setting out how they’ll achieve it, with measurable outcomes over a five-year rolling period with annual milestones. For providers subject to Welsh Language Standards under the Welsh Language (Wales) Measure 2011, compliance with those standards provides baseline assurance. Others must work with the Welsh Language Commissioner through the Cynnig Cymraeg.

    Learner protection plans will be required when Medr gives notice – typically triggered by reportable events, course closures, campus closures, or significant changes to provision. The guidance (in the supplementary detail from page 86 onwards) is clear about what does and doesn’t require a plan. Portfolio review and planned teach-out? Generally fine, provided learners are supported. Closing a course mid-year with no teach-out option? Plan required. Whether this offers the sort of protection that students need – especially when changes are made to courses to reduce costs – will doubtless come up in the consultation.

    And then there’s the Learner Engagement Code, set out in Annex D. This is where student representative bodies may feel especially disappointed. The Code is principles-based rather than rights-based, setting out nine principles (embedded, valued, understood, inclusive, bilingual, individual and collective, impactful, resourced, evaluated) – but creates no specific entitlements or rights for students or students’ unions.

    The principles themselves are worthy enough – learners should have opportunities to engage in decision-making, they should be listened to, routes for engagement should be clear, opportunities should reflect diverse needs, learners can engage through Welsh, collective voice should be supported, engagement should lead to visible impact, it should be resourced, and it should be evaluated. But it does all feel a bit vague.

    Providers will have to submit annual assurance that they comply with the Code, accompanied by evidence such as “analysis of feedback from learners on their experience of engagement” and “examples of decisions made as a result of learner feedback.” But the bar for compliance appears relatively low. As long as providers can show they’re doing something in each area, they’re likely to be deemed compliant. For SUs hoping for statutory backing for their role and resources, this will feel like a missed opportunity.

    Equality of opportunity is more substantial. The condition requires providers to deliver measurable outcomes across participation, retention, academic success, progression, and (where appropriate) participation in postgraduate study and research. The supplementary detail (from page 105) sets out that providers must conduct ongoing self-evaluation to identify barriers to equality of opportunity, then develop measurable outcomes over a five-year rolling period with annual milestones.

    Interestingly, there’s a transition period – in 2026-27, HE providers with Fee and Access Plans need only provide a statement confirming continued commitments. Full compliance – including submission of measurable outcomes – isn’t required until 2027-28, with the first progress reports due in 2028-29. That’s a sensible approach given the sector’s starting points vary considerably, but it does mean the condition won’t bite with full force for three years.

    Monitoring and intervention

    At the core of the monitoring approach is an Annual Assurance Return – where the provider’s governing body self-declares compliance across all applicable conditions, supported by evidence. This is supplemented by learner surveys, Estyn/QAA reviews, public information monitoring, complaints monitoring, reportable events, data monitoring, independent assurance, engagement activities and self-evaluation.

    The reportable events process distinguishes between serious incidents (to be reported within 10 working days) and notifiable events (reported monthly or at specified intervals). There are 17 categories of serious incident, from loss of degree awarding powers to safeguarding failures to financial irregularities over £50,000 or two per cent of turnover (whichever is lower). A table lists notifiable events including senior staff appointments and departures, changes to validation arrangements, and delays to financial returns. It’s a consolidation of existing requirements rather than wholesale innovation, but it’s now formalised across the tertiary sector rather than just HE.
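    For the financial trigger it is worth being clear that the test works on the lower of the two figures – a quick sketch (the function name is mine, the rule is Medr’s):

    ```python
    # A financial irregularity is a serious incident (reportable within 10
    # working days) if it exceeds £50,000 or two per cent of turnover,
    # whichever is the lower figure.

    def is_serious_financial_incident(irregularity, turnover):
        threshold = min(50_000, 0.02 * turnover)
        return irregularity > threshold

    # For a provider with £2m turnover the trigger is £40,000, not £50,000.
    print(is_serious_financial_incident(45_000, 2_000_000))  # True
    ```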

    Medr’s Statement of Intervention Powers (Annex A) sets out escalation from low-level intervention (advice and assistance, reviews) through mid-level intervention (specific registration conditions, enhanced monitoring) to serious “directive” intervention (formal directions) and ultimately de-registration. The document includes helpful flowcharts showing the process for each intervention type, complete with timescales and decision review mechanisms. Providers can also apply for a review by an independent Decision Reviewer appointed by Welsh Ministers – a safeguard that universities dream of in England.

    Also refreshingly, Medr commits to operating “to practical turnaround times” when reviewing financial commitments, with the process “progressing in tandem with your own processes.” A six-week timeline is suggested for complex financing options – although whether this proves workable in practice will depend on Medr’s capacity and responsiveness.

    Quality

    The Quality Framework (Annex C) deserves separate attention because it’s genuinely attempting something ambitious – a coherent approach to quality across FE, HE, apprenticeships, ACL and sixth forms that recognises existing inspection/review arrangements rather than duplicating them.

    The framework has seven “pillars” – learner engagement, learner voice, engagement of the governing body, self-evaluation, externality, continuous improvement and professional learning and development. Each pillar sets out what Medr will do and what providers must demonstrate. Providers will be judged compliant if they achieve “satisfactory external quality assessment outcomes,” have “acceptable performance data,” and are not considered by Medr to demonstrate “a risk to the quality of education.”

    The promise is that:

    …Medr will work with providers and with bodies carrying out external quality assessment to ensure that such assessment is robust, evidence-based, proportionate and timely; adds value for providers and has impact in driving improvement.

    In other words, Estyn inspections and QAA reviews should suffice, with Medr using those outcomes rather than conducting its own assessments. But there’s a caveat:

    Medr has asked Estyn and QAA to consider opportunities for greater alignment between current external quality assessment methodologies, and in particular whether there could be simplification for providers who are subject to multiple assessments.

    So is the coordination real or aspirational? The answer appears to be somewhere in between. The framework acknowledges that by 2027, Medr expects to have reviewed data collection arrangements and consulted on performance indicators and use of benchmarking and thresholds. Until that consultation happens, it’s not entirely clear what “acceptable performance data” means beyond existing Estyn/QAA judgements. And the promise of “greater alignment” between inspection methodologies is a promise, not a done deal.

    A tight timeline

    The key dates bear noting because they’re tight:

    • April 2026: Applications to the register open
    • August 2026: Register launches; most conditions come into effect
    • August 2027: Remaining conditions (Equality of Opportunity and Fee Limits for registered providers) come into full effect; apprenticeship providers fully subject to conditions of funding

    After all these years, we seem to be looking at some exit acceleration. It gives providers only a few months from the consultation closing (17 December 2025) to the application process opening. Final versions of the conditions and guidance presumably need to be published early in 2026 to allow preparation time. And all of this is happening against the backdrop of Senedd elections in 2026 – where polls suggest that some strategic guidance could be dropped on the new body fairly sharpish.

    And some elements remain unresolved or punted forward. The performance indicators consultation promised for 2027 means providers won’t know the quantitative measures against which they’ll be assessed until the system is live. Medr says it will “consult on its approach to defining ‘good’ learner outcomes” as part of a “coherent, over-arching approach” – but that’s after registration and implementation have begun.

    Validation arrangements are addressed (providers must ensure arrangements are effective in enabling them to satisfy themselves about quality), but the consultation asks explicitly whether the condition “could be usefully extended into broader advice or guidance for tertiary partnerships, including sub-contractual arrangements.” That suggests Medr has been reading some of England’s horror stories and recognises the area needs further work.

    And underlying everything is the question of capacity – both Medr’s capacity to operate this system effectively from day one, and providers’ capacity to meet the requirements while managing their existing obligations. The promise of reduced burden through alignment and reuse of evidence is welcome.

    But a unified regulatory system covering everything from research-intensive universities to community-based adult learning requires Medr to develop expertise and processes across an extraordinary range of provision types. Whether the organisation will be ready by August 2026 is an open question.

    For providers, the choice is whether to engage substantively with this consultation knowing that the broad architecture is set by legislation, or to focus energy on preparing for implementation. For Welsh ministers, the challenge is whether this genuinely lighter-touch, more coherent approach than England’s increasingly discredited OfS regime can be delivered without compromising quality or institutional autonomy.

    And for students – especially those whose representative structures were hoping for statutory backing – there’s a question about whether principles-based engagement without rights amounts to meaningful participation or regulatory box-ticking.

    In England, some observers will watch with interest to see whether Wales has found a way to regulate tertiary education proportionately and coherently. Others will see in these documents a reminder that unified systems, however well-intentioned, require enormous complexity to accommodate the genuine diversity of the sector. The consultation responses, due by 17 December, will expose which interpretation the Welsh sector favours.


  • Subcontractual higher education beyond the headlines

    Subcontractual higher education beyond the headlines

    We’ve written a lot about subcontractual provision on Wonkhe, and it is fair to say that very little of it has been positive.

    What’s repeatedly hit the headlines, here and elsewhere, are the providers that teach large numbers of students in circumstances that have sparked concerns about teaching quality, academic standards, and indeed financial probity or even ethics.

    There are a fair number of students that are getting a very bad deal out of subcontractual agreements and, although we’ve been screaming about this for several years, it is good to finally see the beginnings of some action.

    Student number tools

    The long-awaited release of OfS data is not perfect – there’s lots that we’d love to see that does not appear to have been delivered. One of these is proper student numbers: it should be possible to see data on how many students are studying at each subcontracted provider at the last census point.

    Instead, we are scrabbling around with denominators and suppressions trying to build a picture of this part of the sector that is both heavily caveated and three years out of date. This isn’t good enough.

    And it is a shame. Because as well as the horror show, the data we do have offers a glimpse of a little known corner of higher education that arguably deserves to be celebrated.

    I’ve developed some new visualisations to help you explore the data – these add substantial new features to what I have previously published. Both these dashboards work in broadly the same way – the first allows you to examine relationships at delivery providers, the second at lead providers. You choose your provider of interest at the top left, which shows the various relationships on a map on the left hand side. On the right you can see denominator numbers for each year of data – you can use the filter at the top right to see information about the total number of students who might be continuing, completing, or progressing in a given year.

    Each row on the right hand side shows a combination of provider (lead provider on the first dashboard, delivery provider on the second), mode, and level – with denominators and suppression codes available in the coloured squares on the right. The suppression codes are as follows:

    • [DQ]: information suppressed due to low data quality from the 2022-23 collection
    • [low]: There are more than 2 but fewer than 23 students in the denominator
    • [none]: There are 2 students or fewer in the denominator
    • [DP]: Data redacted for reasons of data protection
    • [DPL]: Data redacted for reasons of data protection (very low numbers)
    • [DPH]: Data redacted for reasons of data protection (within 2 of the denominator)
    • [RR]: Below the threshold response rate (for progression)
    • [BK]: No benchmark reported (the benchmark would include at least 50 per cent of the provider’s own students)

    You can see available indicators (including upper and lower confidence intervals at 95%), benchmarks, and numeric thresholds by mousing over one of the coloured squares. The filled circle is the indicator, the outline diamond is the benchmark, and the cross is the threshold.
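    If you prefer to poke at the underlying data rather than the dashboards, each row can be thought of as a record keyed by lead provider, delivery provider, mode, level and measure. Here is a minimal sketch of how you might filter out the suppressed values – the field names and example records are invented for illustration, and only the suppression codes match the list above.

    ```python
    # Sketch of handling dashboard-style rows. Field names and example
    # records are invented; the suppression codes are those listed above.

    SUPPRESSION_CODES = {"DQ", "low", "none", "DP", "DPL", "DPH", "RR", "BK"}

    rows = [
        {"lead": "University A", "delivery": "College X", "mode": "Full-time",
         "level": "First degree", "measure": "continuation",
         "denominator": 260, "indicator": 9.8, "benchmark": 88.1, "threshold": 80.0},
        {"lead": "University A", "delivery": "College Y", "mode": "Full-time",
         "level": "First degree", "measure": "continuation",
         "denominator": 15, "indicator": "low", "benchmark": None, "threshold": 80.0},
    ]

    # Keep only rows where an actual indicator value was published.
    reportable = [r for r in rows if r["indicator"] not in SUPPRESSION_CODES]

    for r in reportable:
        flag = "below threshold" if r["indicator"] < r["threshold"] else "ok"
        print(r["delivery"], r["measure"], r["indicator"], flag)
    ```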

    [Embedded dashboards: delivery provider view and lead provider view]

    A typology

    It’s worth noting the range of providers that are subcontracted to deliver higher education for others. There were an astonishing 681 of these between 2014 and 2022.

    A third of those active in delivering provision for others (227) are registered with the Office for Students in their own right. Fifty-nine of these are recognisable as universities or other established higher education providers – including 14 in the Russell Group.

    Why would that happen? In some cases, a provider may not have had the degree awarding powers necessary for research degrees, so would partner with another university to deliver particular courses. In other cases, the peculiarities of this data mean that apprenticeship arrangements are shown with the university partner. There’s also some examples of two universities working together to deliver a single programme.

    We also find many examples of longstanding collaborations between universities and teaching organisations in the arts. Numerous independent schools of dance, drama, and music have offered higher education qualifications with the support of a university – Bird College (the Doreen Bird College of Performing Arts) has worked with a university partner since 1997. Italia Conti used to have an arrangement with the University of East London; it now works with the University of Chichester.

    There are 135 organisations delivering apprenticeships in a relationship with an OfS-registered higher education provider. Universities often offer end point assessment and administrative support to employers and others who offer apprenticeships between level 4 and level 7.

    Two large providers – Navitas and QA – offer foundation courses and accredited year one courses for international students at UK universities: QA also offers a range of programmes aimed at home undergraduates. We could also add Into as a smaller example. This dataset probably isn’t the best place to see this (QA is shown as multiple, linked, organisations) but this is a huge area of provision.

    Seventy-four subcontracted providers are schools, or school centred initial teacher training (SCITT) organisations. As teacher training has gradually moved closer to the classroom and away from the lecture hall, many schools offer opportunities to gain the industry-standard Postgraduate Certificate in Education (PGCE), which is the main route to qualified teacher status. A PGCE is a postgraduate qualification and is thus only awarded by organisations with postgraduate degree awarding powers.

    In total there are 144 providers subcontracted to deliver PGCE (initial teacher training) courses, primarily schools, local councils, and further education colleges (FECs). There are 166 FECs involved in subcontracted delivery – and this extends far beyond teacher training. Most large FECs have a university centre or similar, offering a range of foundation and undergraduate courses often (but not always) in vocational subjects. The Newcastle College Group used its experience of delivering postgraduate taught masters courses for Canterbury Christ Church University to successfully apply for postgraduate degree awarding powers – the first FEC to do so.

    We find 23 NHS organisations represented within the data. Any provider delivering medical, medical related, or healthcare subjects will have a relationship with one or more NHS foundation trust – as a means to offer student placements, and bring clinical expertise into teaching. This is generally an accreditation requirement. But in many cases, the relationship extends to the university awarding credit or qualifications for the learning and training that NHS staff do. The Oxford Health NHS Foundation Trust works with multiple providers (the University of Oxford, Oxford Brookes University, and Buckinghamshire New University), to offer postgraduate apprenticeships in clinical and non-clinical roles.

    Nine police organisations (either constabularies or police and crime commissioners) have subcontractual relationships with registered higher education providers. Teesside University works with the Chief Constable of Cleveland to offer an undergraduate apprenticeship for prospective police officers.

    All three of the UK’s armed forces have subcontractual relationships with higher education providers. The British Army currently works with the University of Reading to offer undergraduate and postgraduate degrees in leadership and strategic studies – in the past it has offered a range of qualifications from Bournemouth University. Kingston University has a relationship with the Royal Navy, currently offering an MSc in Technology (Maritime Operations) undertaken entirely in the workplace.

    Ecosystem

    When I talk to people about franchise and partnership arrangements, most (perhaps thinking of the examples that make the mainstream press) ask me whether it would not be easier to simply ban such arrangements. After all, it is very difficult to see any benefit from the possibly fraudulent and often low quality behaviour that is plastered all over The Times on a regular basis.

    As I think the data demonstrates, a straight-ahead ban would be hugely damaging – swathes of national priorities and achievements (from NHS staff development, to offering higher education in “cold spots”, to the quality of performances on London’s West End) would be adversely affected. But the same could be said for increases in regulatory overheads.

    There are a handful of very large providers (I’d start with Global Banking School, Elizabeth School of London, Navitas, QA, Into, London School of Science and Technology, and a few others – and from the data you’d have included Oxford Business College) that are, effectively, university-like in size and scope. It is very difficult to understand why these are not all registered providers given the scale of their operations (GBS, Into, and Navitas already are), and this does seem to be the right direction of travel.

    There are a clutch of medium-sized delivery providers, often in a single long-standing relationship with a higher education institution. Often, these are nano-specialisms, particularly in the creative arts or in locally important industries. In many of these cases oversight on quality and standards from the lead provider has been proven over a number of years to work well – and there seems little benefit to changing this arrangement. I would hope for this group – as is likely to happen for the FECs, SCITTs, NHS, police, and armed forces – that a change to regulatory oversight only happens where there is an issue identified.

    There is also a long tail of very small arrangements, often linked to apprenticeships (and regulated accordingly). For others at this end of the scale it is difficult to imagine OfS having the time or the capacity to regulate, so almost by default oversight remains with the lead partners. I know I say this in nearly every article, but at this end it feels like we need some kind of regular review of the way quality processes work for external providers within lead providers – we need to be sure lead providers are able to do what can be a very difficult job and do it well.


  • Outcomes data for subcontracted provision

    Outcomes data for subcontracted provision

    In 2022–23 there was a group of around 260 full time first degree students – registered to a well-known provider and taught via a subcontractual arrangement – with a continuation rate of just 9.8 per cent: of those 260 students, just 25 or so actually continued on to their second year.

    Whatever you think about franchising opening up higher education to new groups, or allowing established universities the flexibility to react to fast-changing demand or skills needs, none of that actually happens if more than 90 per cent of the registered population doesn’t continue with their course.

    It’s because of issues like this that we (and others) have been badgering the Office for Students to produce outcomes data for students taught via subcontractual arrangements (franchises and partnerships) at a level of granularity that shows each individual subcontractual partner.

    And finally, after a small pilot last year, we have the data.

    Regulating subcontractual relationships

    If anything it feels a little late – there are now two overlapping proposals on the table to regulate this end of the higher education marketplace:

    • A Department for Education consultation suggests that every delivery partner that has more than 300 higher education students would need to register with the Office for Students (unless it is regulated elsewhere)
    • And an Office for Students consultation suggests that every registering partner with more than 100 higher education students taught via subcontractual arrangements will be subject to a new condition of registration (E8)

    Both sets of plans address, in their own way, the current reality that the only direct regulatory control available over students studying via these arrangements is via the quality assurance systems within the registering (lead) partners. This is an arrangement left over from previous quality regimes, where the nation spent time and money to assure itself that all providers had robust quality assurance systems that were being routinely followed.

    In an age of dashboard-driven regulation, the fact that we have not been able to easily disaggregate the outcomes of subcontractual students has meant that it has not been possible to regulate this corner of the sector – we’ve seen rapid growth of this kind of provision under the Office for Students’ watch, and oversight (to be frank) has just not been up to the job.

    Data considerations

    Incredibly, it wasn’t even the case that the regulator had this data but chose not to publish it. OfS has genuinely had to design this data collection from scratch in order to get reliable information – many institutions expressed concern about the quality of data they might be getting from their academic partners (which should have been a red flag, really).

    So what we get is basically an extension of the B3 dashboards where students in the existing “partnership” population are assigned to one of an astonishing 681 partner providers alongside their lead provider. We’d assume that each of these specific populations has data across the three B3 (continuation, completion, progression) indicators – in practice many of these are suppressed for the usual OfS reasons of low student numbers and (in the case of progression) low Graduate Outcomes response rates.

    Where we do get indicator values we also see benchmarks and the usual numeric thresholds – the former indicating what OfS might expect to see given the student population, the latter being the line beneath which the regulator might feel inclined to get stuck into some regulating.

    One thing we can’t really do with the data – although we wanted to – is treat each subcontractual provider as if it was a main provider and derive an overall indicator for it. But because many subcontractual providers have relationships (and students) with numerous lead providers, adding those populations together shows that we are dealing with some reasonably sized institutions. Two – Global Banking School and the Elizabeth School of London – appear to have more than 5,000 higher education students: GBS is around the same size as the University of Bradford, the Elizabeth School is comparable to Liverpool Hope University.

    Size and shape

    How big these providers are is a good place to start. We don’t actually get formal student numbers for these places – but we can derive a reasonable approximation from the denominator (population size) for one of the three indicators available. I tend to use continuation as it gives me the most recent (2022–23) year of data.

    [Embedded dashboard: approximate student numbers by delivery partner]

    The charts showing numbers of students are based on the denominators (populations) for one of the three indicators – by default I use continuation as it is more likely to reflect recent (2022–23) numbers. Because both the OfS and DfE consultations talk about all HE students there are no filters for mode or level.

    For each chart you can select a year of interest (I’ve chosen the most recent year by default) or the overall indicator (which, like on the main dashboards, is synthetic over four years). If you change the indicator you may have to change the year. I’ve not included any indications of error – these are small numbers and the possible error is wide, so any responsible regulator would have to do more investigating before stepping in to regulate.

    Recall that the DfE proposal is that institutions with more than 300 higher education students would have to register with OfS if they are not regulated in another way (as a school, FE college, or local authority, for instance). I make that 26 with more than 300 students, a small number of which appear to be regulated as an FE college.
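    If you want to reproduce that approximation yourself the logic is straightforward – this sketch assumes a tidy extract of the OfS data with one row per lead/delivery pairing, mode and level; the file and column names are hypothetical, so adjust them to whatever the actual release uses.

    ```python
    import pandas as pd

    # Approximate delivery partner size from continuation denominators,
    # then count partners above the proposed 300-student registration line.
    # The file and column names here are hypothetical.

    df = pd.read_csv("ofs_partnership_indicators.csv")

    cohort = df[(df["measure"] == "continuation") & (df["year"] == "2022-23")]

    # Sum denominators across every lead provider, mode and level to get an
    # approximate headcount for each delivery partner.
    size_by_delivery = (cohort.groupby("delivery_provider")["denominator"]
                              .sum()
                              .sort_values(ascending=False))

    over_300 = size_by_delivery[size_by_delivery > 300]
    print(len(over_300), "delivery partners above the proposed 300 student line")
    ```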

    You can also see which lead providers are involved with each delivery partner – there are several that have relationships with multiple universities. It is instructive to compare outcomes data within a delivery partner – clearly differences in quality assurance and course design do have an impact, suggesting that the “naive university hoodwinked by low quality franchise partner” narrative, if it has any truth to it at all, is not universally true.

    [Embedded dashboard: outcomes by delivery partner]

    The charts showing the actual outcomes are filtered by mode and level as you would expect. Note that not all levels are available for each mode of study.

    This chart brings in filters for level and mode – there are different indicators, benchmarks, and thresholds for each combination of these factors. Again, there is data suppression (low numbers and responses) going on, so you won’t see every single aspect of every single relationship in detail.

    That said, what we do see is a very mixed bag. Quite a lot of provision sits below the threshold line, though there are also some examples of very good outcomes – often at smaller, specialist, creative arts colleges.

    Registration

    I’ve flipped those two charts to allow us to look at the exposure of registered universities to this part of the market. The overall sizes in recent years at some providers won’t be of any surprise to those who have been following this story – a handful of universities have grown substantially as a result of a strategic decision to engage in multiple academic partnerships.

    [Embedded dashboard: partnership student numbers by lead provider]

    Canterbury Christ Church University, Bath Spa University, Buckinghamshire New University, and Leeds Trinity University have always been the big four in this market. But of the 84 registered providers engaged in partnerships, I count 44 that would have met the 100 student threshold for the proposed new condition of registration (E8) had it applied in 2022–23.

    Looking at the outcomes measures suggests that working across multiple partners does not in itself produce wide variation in performance, although there will always be variation by teaching provider, subject, and population. It is striking that places with a lot of different partners tend to get reasonable results – lower indicator values tend to be found at places running just one or two relationships, so it does feel like some work on improving external quality assurance and validation would be of some help.

    [Embedded dashboard: outcomes by lead provider]

    To be clear, this is data from a few years ago (the most recent available data is from 2022–23 for continuation, 2019–20 for completion, and 2022–23 for progression). It is very likely that providers will have identified and addressed issues (or ended relationships) using internal data long before either we or the Office for Students got a glimpse of what was going on.

    A starting point

    There is clearly a lot more that can be done with what we have – and I can promise this is a dataset that Wonkhe is keen to return to. It gets us closer to understanding where problems may lie – the next phase would be to identify patterns and commonalities to help us get closer to the interventions that will help.

    Subcontractual arrangements have a long and proud history in UK higher education – just about every English provider started off in a subcontractual arrangement with the University of London, and it remains the most common way to enter the sector. A glance across the data makes it clear that there are real problems in some areas – but it is something other than the fact of a subcontractual arrangement that is causing them.

    Do you like higher education data as much as I do? Of course you do! So you are absolutely going to want to grab a ticket for The Festival of Higher Education on 11-12 November – it’s Team Wonkhe’s flagship event and data discussion is actively encouraged. 


  • What the saga of Oxford Business College tells us about regulation and franchising

    What the saga of Oxford Business College tells us about regulation and franchising

    One of the basic expectations of a system of regulation is consistency.

    It shouldn’t matter how prestigious you are, how rich you are, or how long you’ve been operating: if you are active in a regulated market then the same rules should apply to all.

    Regulatory overreach can happen when there is public outrage over elements of what is happening in that particular market. The pressure a government feels to “do something” can override processes and requirements – attempting to reach the “right” (political or PR) answer rather than the “correct” (according to the rules) one.

    So when courses at Oxford Business College were de-designated by the Secretary of State for Education, there’s more to the tale than a provider where legitimate questions had been raised about the student experience getting its just deserts. It is a cautionary tale, involving a fascinating high-court judgment and some interesting arguments about the limits of ministerial power, of what happens when political will gets ahead of regulatory processes.

    Business matters

    A splash in The Sunday Times back in the spring concerned the quality of franchised provision from – as it turned out – four Office for Students registered providers taught at Oxford Business College. The story came alongside tough language from Secretary of State for Education Bridget Phillipson:

    I know people across this country, across the world, feel a fierce pride for our universities. I do too. That’s why I am so outraged by these reports, and why I am acting so swiftly and so strongly today to put this right.

    And she was in no way alone in feeling that way. Let’s remind ourselves, the allegations made in The Sunday Times were dreadful. Four million pounds in fraudulent loans. Fake students, and students with no apparent interest in studying. Non-existent entry criteria. And, as we shall see, that’s not even as bad as the allegations got.

    De-designation – removing the eligibility of students at a provider to apply for SLC fee or maintenance loans – is one of the few levers government has to address “low quality” provision at an unregistered provider. Designation comes automatically when a course is franchised from a registered provider: a loophole in the regulatory framework that has caused concern over a number of years. Technically an awarding provider is responsible for maintaining academic quality and standards for its students studying elsewhere.

    The Office for Students didn’t have any regulatory jurisdiction other than pursuing the awarding institutions. OBC had, in fact, tried to register with OfS – withdrawing the application in the teeth of the media firestorm at the end of March.

    So everything depended on the Department for Education overturning precedent.

    Ministering

    It is “one of the biggest financial scandals universities have faced.” That’s what Bridget Phillipson said when presented with The Sunday Times’ findings. She announced that the Public Sector Fraud Authority would coordinate immediate action, and promised to empower the Office for Students to act in such cases.

    In fact, OBC was already under investigation by the Government Internal Audit Agency (GIAA) and had been since 2024. DfE had been notified by the Student Loans Company about trends in the data and other information that might indicate fraud at various points between November 2023 and February 2024 – notifications that we now know were summarised in a report detailing the concerns, which was sent to DfE in January 2024. The eventual High Court judgement (the details of which we will get to shortly) outlined just a few of these allegations, which I take from the court documents:

    • Students enrolled in the Business Management BA (Hons) course did not have basic English language skills.
    • Less than 50 per cent of students enrolled at the London campus participate, and the remainder instead pay staff to record them as in attendance.
    • Students have had bank details altered or new bank accounts opened in their name, to which their maintenance payments were redirected.
    • Staff are encouraging fraud through fake documents sent to SLC, fake diplomas, and fake references. Staff are charging students to draft their UCAS applications and personal statements. Senior staff are aware of this and are uninterested.
    • Students attending OBC do not live in the country. In one instance, a dead student was kept on the attendance list.
    • Students were receiving threats from agents demanding money and, if the students complained, their complaints were often dealt with by those same agents threatening the students.
    • Remote utilities were being used for English language tests where computers were controlled remotely to respond to the questions on behalf of prospective students.
    • At the Nottingham campus, employees and others were demanding money from students for assignments and to mark their attendance to avoid being kicked off their course.

    At the instigation of DfE, and with the cooperation of OBC, GIAA started its investigation on 19 September 2024, continuing to request information from and correspond with the college until 17 January 2025. An “interim report” detailing emerging findings went to DfE on 17 December 2024; the final report arrived on 30 January 2025. The final report made numerous recommendations about OBC processes and policies, but did not recommend de-designation. That recommendation came in a ministerial submission, prepared by civil servants, dated 18 March 2025.

    Process story

    OBC didn’t get sight of these reports until 20 March 2025, after the decisions were made. It got summaries of both the interim and final reports in a letter from DfE notifying it that Phillipson was “minded to” de-designate. The documentation tells us that GIAA reported that OBC had:

    • recruited students without the required experience and qualifications to successfully complete their courses
    • failed to ensure students met the English language proficiency as set out in OBC and lead provider policies
    • failed to ensure attendance is managed effectively
    • failed to withdraw or suspend students that fell below the required thresholds for performance and/or engagement
    • failed to provide evidence that immigration documents, where required, are being adequately verified

    The college had 14 days to respond to the summary and provide factual comment for consideration, during which period The Sunday Times published its story. OBC asked DfE for the underlying material that informed the findings and the subsequent decision, and for an extension (it didn’t get all the material, but it got a further five days) – and it submitted 68 pages of argument and evidence to DfE on 7 April 2025. Another departmental ministerial submission (on 16 April 2025) recommended that the Secretary of State confirm the decision to de-designate.

    According to the OBC legal team, these emerging findings were not backed up by the full GIAA reports, and there were concerns about the way a small student sample had been used to generalise across an entire college. Most concerningly, the reports as eventually shared with the college did not support de-designation (though they did support a number of other concerns about OBC and its admissions process). GIAA itself responded to OBC’s submission with a note which – although conceding that aspects of the report could have been expressed more clearly – concluded:

    The majority of the issues raised relate to interpretation rather than factual accuracy. Crucially, we are satisfied that none of the concerns identified have a material impact on our findings, conclusions or overall assessment.

    Phillipson’s decision to de-designate was sent to the college on 17 April 2025, and it was published as a Written Ministerial Statement. Importantly, in her letter, she noted that:

    The Secretary of State’s decisions have not been made solely on the basis of whether or not fraud has been detected. She has also addressed the issue of whether, on the balance of probabilities, the College has delivered these courses, particularly as regards the recruitment of students and the management of attendance, in such a way that gives her adequate assurance that the substantial amounts of public money it has received in respect of student fees, via its partners, have been managed to the standards she is entitled to expect.

    Appeal

    Oxford Business College appealed the Secretary of State’s decision. Four grounds of challenge were pursued:

    • Ground 3: the Secretary of State had stepped beyond her powers in prohibiting OBC from receiving public funds for providing new franchised courses in the future.
    • Ground 1: the decision was procedurally unfair, with key materials used by the Secretary of State in making the decision not provided to the college, and the college never told the criteria it was being assessed against.
    • Ground 4: by de-designating courses, DfE breached OBC’s rights under Article 1 of the First Protocol to the European Convention on Human Rights (to peaceful enjoyment of its possessions – in this case the courses themselves).
    • Ground 7: the decision by the Secretary of State had breached the public sector equality duty.

    Of these, ground 3 was not determined, as the Secretary of State had clarified that no decision had been taken regarding future courses delivered by OBC. Ground 4 was deemed to be a “controversial” point of law regarding whether a course and its designation status could be a “possession” under ECHR, but could be proceeded with at a later date. Ground 7 was not decided.

    Ground 1 succeeded. The court found that OBC had been subject to an unfair process, where:

    OBC was prejudiced in its ability to understand and respond to the matters of the subject of investigation, including as to the appropriate sanction, and to understand the reasons for the decision.

    Judgement

    OBC itself, or the lawyers it engaged, have perhaps unwisely decided to put the judgement into the public domain – it has yet to be formally published. I say unwisely, because it also puts the initial allegations into the public domain and does not detail any meaningful rebuttal from the college – though The Telegraph has reported that the college now plans to sue the Secretary of State for “tens of millions of pounds.”

    The win, such as it is, was entirely procedural. The Secretary of State should have shared more detail of the findings of the GIAA investigation (at both “emerging” and “final” stages) in order that the college could make its own investigations and dispute any points of fact.

    Much of the judgement deals with the criteria by which a sample of 200 students was selected – OBC was not made aware that the sample comprised those “giving the greatest cause for suspicion” rather than being a random sample – and with OBC’s inability to identify the students whose circumstances or behaviour were mentioned in the report. These were omissions, but nowhere is it argued by OBC that these were not real students with real experiences.

    Where allegations are made that students might be being threatened by agents and institutional staff, it is perhaps understandable that identifying details might be redacted – though DfE cited the “pressure resulting from the attenuated timetable following the order for expedition, the evidence having been filed within 11 days of that order” as the reason for the difficulties it faced in redacting the report properly. On this point, DfE noted that OBC, using the materials provided, “had been able to make detailed representations running to 68 pages, which it had described as ‘comprehensive’ and which had been duly considered by the Secretary of State”.

    The Secretary of State, in evidence, rolled back from the idea that she could automatically de-designate future courses without specific reason, but this does not change the decisions she has made about the five existing courses delivered in partnership. Neither does it change the fact that OBC, having had five courses forcibly de-designated, and seen the specifics of the allegations underpinning this exceptional decision put into the public domain without any meaningful rebuttal, may struggle to find willing academic partners.

    The other chink of legal light came with an argument that a contract (or subcontract) could be deemed a “possession” under certain circumstances, and that Article 1 of the First Protocol to the European Convention on Human Rights protects the peaceful enjoyment of possessions. The judgement admits that there could be grounds for debate here, but that debate has not yet happened.

    Rules

    Whatever your feelings about OBC, or franchising in general, the way in which DfE appears to have used a carefully redacted and summarised report to remove an institution from the sector is concerning. If the rules of the market permit behaviour that ministers do not like, then these rules need to be re-written. DfE can’t just regulate based on what it thinks the rules should be.

    The college issued a statement on 25 August, three days after the judgement was published – it claims to be engaging with “partner institutions” (named as Buckinghamshire New University, University of West London, Ravensbourne University London, and New College Durham – though all four had already ended their partnerships, with the remaining students being “taught out”) about the future of the students affected by the de-designation decision – many had already transferred to other courses at other providers.

    In fact, the judgement tells us that of 5,000 students registered at OBC on 17 April 2025, around 4,700 had either withdrawn or transferred out of OBC to be taught out. We also learn that 1,500 new students, who had planned to start an OBC-delivered course after 2025, would no longer be doing so. Four lead providers had given notice to terminate franchise agreements between April 2024 and May 2025. Franchise discussions with another provider – Southampton Solent University – which were underway shortly before the decision to de-designate, had also ended.

    OBC currently offers one course itself (no partnership offers are listed) – a foundation programme covering academic skills and English language, including specialisms in law, engineering, and business – which is designed to prepare students for the first year of an undergraduate degree course. It is not clear what award this course leads to, or how it is regulated. It is also expensive – a six-month version (requiring IELTS 5.5 or above) costs an eye-watering £17,500. And there is no information as to how students might enrol on this course.

    OBC’s statement about the court case indicates that it “rigorously adheres to all regulatory requirements”, but it is not clear which (if any) regulator has jurisdiction over the one course it currently advertises.

    If there are concerns about the quality of teaching, or about academic standards, in any provider in receipt of public funds, they clearly need to be addressed – and this is as true for Oxford Business College as it is for the University of Oxford. This should start with a clear plan for quality assurance (ideally one that reflects the current concerns of students) and a watertight process that can be used both to drive compliance and to take action against those who don’t measure up. Ministerial legal innovation, it seems, doesn’t quite cut it.

    Source link

  • Is there a place for LEO in regulation?

    Is there a place for LEO in regulation?

    The OfS has, following a DfE study, recently announced a desire to use LEO for regulation. In my view this is a bad idea.

    Don’t get me wrong, the Longitudinal Education Outcomes (LEO) dataset is a fantastic and under-utilised tool for historical research. Nothing can compare to LEO for its rigour, coverage and the richness of the personal data it contains.

    However, it has serious limitations. It captures earnings rather than salary, so for everyone who chooses to work part time it will seriously underestimate the salary they command.

    And fundamentally it’s just too lagged. You can add other concerns around those choosing not to work and those working abroad if you wish to undermine its utility further.

    The big idea

    The OfS is proposing to use data from three years after graduation, which I assume to mean the third full tax year after graduation – although it could mean something different, as no details are provided. Assuming my interpretation is correct, the most recent LEO data, published in June this year, relates to the 2022-23 tax year. For that to be the third full tax year after graduation, we are talking about the 2018-19 graduating cohort (and even if you count the third tax year as including the one in which they graduated, it’s the 2019-20 graduates). The OfS also proposes to continue to use four-year aggregates, which makes a lot of sense to avoid statistical noise and deal with small cohorts, but it does mean that some of the data will relate to even earlier cohorts.
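
    To make that lag concrete, here is a minimal sketch in Python of the arithmetic as I read it – the function and its parameters are purely illustrative, and both the “third full tax year” interpretation and the four-year aggregation window are my assumptions rather than anything OfS has confirmed:

    ```python
    # Purely illustrative: my reading of the proposed lag, not OfS methodology.
    # Assumptions: "3 years after graduation" means the third full tax year after
    # graduation, and the indicator aggregates four tax years of LEO data.

    def graduating_cohort(tax_year_start: int, full_tax_years_after: int = 3) -> str:
        """Return the academic year of graduation whose Nth full tax year
        starts in `tax_year_start` (e.g. 2022 for the 2022-23 tax year).

        A summer graduate's first full tax year begins the following April,
        so their Nth full tax year starts N years after they graduate.
        """
        grad_year = tax_year_start - full_tax_years_after   # 2022 - 3 = 2019
        return f"{grad_year - 1}-{str(grad_year)[-2:]}"      # -> "2018-19"

    latest = 2022  # the 2022-23 tax year, the most recent LEO release

    print(graduating_cohort(latest))
    # -> 2018-19

    # A four-year aggregate built on that release reaches back further still:
    print([graduating_cohort(latest - offset) for offset in range(4)])
    # -> ['2018-19', '2017-18', '2016-17', '2015-16']
    ```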

    The problem, therefore, is that if the proposed regime had been in place this year, the OfS would have just got its first look at outcomes from the 2018-19 graduating cohort – who were, of course, entrants in 2016-17 or earlier. Looked at through this lens, it is hard to see how one applies any serious regulatory tools to a provider failing on this metric but performing well on others, especially if they are performing well on those based on the still lagged but more timely Graduate Outcomes survey.

    It is hard to conceive of any course that will not have had at least one significant change in the 9 (up to 12!) years since the measured cohort entered. It therefore won’t be hard for most providers to argue that the changes they have made since those cohorts entered will have had positive impacts on outcomes, and the regulator will have to give some weight to those arguments – especially if they are supported by changes in the existing progression indicator or the proposed new skills utilisation indicator.

    A problem?

    And if performance on the existing progression indicator was problematic, why didn’t the regulator act on it when it had that data four years earlier? The OfS could try to argue that it’s a different indicator capturing a different aspect of success, but this, at least to this commentator’s mind, is a pretty flimsy argument and is likely to fail, because earnings is a very narrow definition of success. Indeed, by having two indicators the regulator may well find itself in a situation where it can only take meaningful action if a provider is failing on both.

    OfS could begin to address the time lag by looking at just the first full tax year after graduation, but this will undoubtedly be problematic as graduates take time to settle into careers (which is why GO is at 15 months), and of course the interim study issues will be far more significant for this cohort. It would also still be less timely than the Graduate Outcomes survey, which itself collects the far more meaningful salary rather than earnings.

    There is of course a further issue with LEO, in that it will forever be a black box for the providers being regulated using it. It will not be possible to share with providers the sort of rich data that is shared for other metrics, meaning that providers will not be able to undertake any serious analysis into the causes of any concerns the OfS may raise. For example, a provider would struggle to attribute poor outcomes to a course it discontinued, perhaps because it felt the course didn’t speak to the employment market. A cynic might even conclude that having a metric nobody can understand or challenge is quite nice for the OfS.

    The use of LEO in regulation is likely to generate a lot of work for the OfS and may trigger lots of debate, but I doubt it will ever lead to serious negative consequences, as the contextual factors – and the fact that the cohorts being considered are ancient history – will dull, if not completely blunt, the regulatory tools.

    Richard Puttock writes in a personal capacity.

    Source link

  • Testing Times & Interesting Discussions

    Testing Times & Interesting Discussions

    Last week, The Royal Bank of Canada (RBC) put out a discussion paper called Testing Times: Fending Off A Crisis in Post-Secondary Education, which in part is the outcome of a set of cross-country discussions held this summer by RBC, HESA, and the Business Higher Education Roundtable (BHER). The paper, I think, sums up the current situation pretty well: the system is not at a starvation point, but it is heading in that direction pretty quickly and that needs to be rectified. On the other hand, there are some ways that institutions could be moving more quickly to respond to changing social and economic circumstances. What’s great about this paper is that it balances those two ideas pretty effectively.

    I urge everyone to read it themselves because I think it sums up a lot of issues nicely – many of which we at HESA will be taking up at our Re: University conference in January (stay tuned! The nearly full conference line-up will be out in a couple of weeks, and it’s pretty exciting). But I want to draw everyone’s attention in particular to section 4 of the report, which covers what I think is the sleeper issue of the year: the regulation of post-secondary institutions. One of the things we heard a lot on the road was how universities were being hamstrung – not just by governments but by professional regulatory bodies – in terms of developing innovative programming. This is a subject I’ll return to in the next week or two, but I am really glad that this issue might be starting to get some real traction.

    The timing of this release wasn’t accidental: it came just a few days before BHER had one of its annual high-level shindigs, and RBC’s CEO Dave McKay is also BHER’s Board Chair, so the two go hand-in-hand to some extent. I was at the summit on Monday – a Chatham House rules session at RBC headquarters, entitled Strategic Summit on Talent, Technology and a New Economic Order – which attracted a good number of university and college presidents, as well as CEOs. The discussions took up the challenge in the RBC paper to look at where the country is going and where the post-secondary education sector can contribute to making a new and stronger Canada.

    And boy, was it interesting.

    I mean, partly it was some of the outright protectionist stuff being advocated by the corporate sector in the room. I haven’t heard stuff like that since I was a child. Basically, the sentiment in the room is that the World Trade Organization (WTO) is dead, the Americans aren’t playing by those rules anymore, so why should we? Security of supply > low-cost supply. Personally, I think that likely means that this “new economic order” is going to mean much more expensive wholesale prices, but hey, if that’s what we have to adapt to, that’s what we have to adapt to.

    But, more pertinent to this blog were the ways the session dealt with the issue of what in higher education needs to change to meet the moment. And, for me, what was interesting was that once you get a group of business folks in a room and ask what higher education can do to help get the country on track, they actually don’t have much to say. They will talk a LOT about what government can do to help get the country on track. The stories they can tell about how much more ponderous and anti-innovation Canadian public procurement policies are compared to almost any other jurisdiction on earth would be entertaining if the implications were not so horrific. They will talk a LOT about how Canadian C-suites are risk-averse, almost as risk-averse as government, and how disappointing that is.

    But when it comes to higher education? They don’t actually have all that much to say. And that’s both good and bad.

    Now before I delve into this, let me say that it’s always a bit tricky to generalize what a sector believes based on a small group of CEOs who get drafted into a room like this one. I mean, to some degree these CEOs are there because they are interested in post-secondary education, so they aren’t necessarily very representative of the sector. But here’s what I learned:

    • CEOs are a bit ruffled by current underfunding of higher education. Not necessarily to the point where they would put any of their own political capital on the line, but they are sympathetic to institutions.
    • When they think about how higher education affects their business, CEOs seem to think primarily about human capital (i.e. graduates). They talk a lot less about research, which is mostly what universities want to talk about, so there is a bit of a mismatch there.
    • When they think about human capital, what they are usually thinking about is “can my business have access to skills at a price I want to pay?” Because the invitees are usually heads of successful fast-growing companies, the answer is usually no. Also, most say what they want are “skills” – something they, not unreasonably, equate with experience, which sets up another set of potential misunderstandings with universities because degrees ≠ experience (but it does mean everyone can agree on more work-integrated learning).
    • As a result – and this is important here – at best, CEOs think about post-secondary education in terms of firm growth, not in terms of economy-wide innovation.

    Now, maybe that’s all right and proper – after all, isn’t it government’s business to look after the economy-wide stuff? Well, maybe, but here’s where it gets interesting. You can drive innovation either by encouraging the manufacture and circulation of ideas (i.e. research) or by diffusing skills through the economy (i.e. education/training). But our federal government seems to think that innovation only happens via the introduction of new products/technology (i.e. the product of research), and that to the extent there is an issue with post-secondary education, it is that university-based research doesn’t translate into new products fast enough – i.e. the issue is research commercialization. The idea that slow technological adoption might be the product of governments and firms not having enough people to use new technologies properly (e.g. artificial intelligence)? Not on anyone’s radar screen.

    And that really is a problem. One I am not sure is easily fixed because I am not sure everyone realizes the degree to which they are talking past each other. But that said, the event was a promising one. It was good to be in a space where so many people cared about Canada, about innovation, and about post-secondary education. And the event itself – very well pulled-off by RBC and BHER – made people want to keep discussing higher education and the economy. Both business and higher education need to have events like this one, regularly, and not just nationally but locally as well. The two sides don’t know each other especially well, and yet their being more in sync is one of the things that could make the country work a lot better than it does. Let’s keep talking.

    Source link