Tag: regulation

  • Inquiry asks how regulation can be streamlined – Campus Review

    At a federal governance inquiry on Monday, the leaders of the merged Adelaide University told senators that compliance costs are eating into spending on research and students.


  • The white paper on regulation

    The Office for Students is a creation of the Higher Education and Research Act 2017 (HERA), but this legislation was not the last word on the matter.

    It has gained new powers and new responsibilities over the years, and – looking closely at the white paper – it is set to expand its powers, capabilities, and capacity even further.

    As the Department for Education – and ministers and politicians more generally – make new demands of the regulator, it needs to be given the power to meet them. That generally means amending HERA, which almost always requires further primary legislation.

    It is clear that much of the change that ministers expect to see in the higher education sector – as set out via the white paper – needs to happen via the action of the regulator.

    Regulation, rebooted

    The 2022 Skills and Post-16 Education Act gave OfS explicit powers to assess the quality of higher education with reference to student outcomes, protection from defamation claims based on regulatory decisions, and the duty to publish details of investigations.

    The 2023 Higher Education (Freedom of Speech) Act, alongside the various measures linked directly to freedom of speech and academic freedom, attempted to grant OfS the power to monitor overseas funding – this was, in the end, not enacted.

    These decisions to give OfS new powers and new duties will have been influenced by legal embarrassment (the student outcomes and defamation issues) and perceived threats (such as on overseas funding or freedom of speech), but measures are generally finessed in conversation with the regulator and its own assessment of powers that it needs.

    It is fair to assume that OfS – famously careful around legal risk – would largely prefer to have more powers rather than fewer. The diversity of the sector, and the range of political and public concern about what providers actually get up to, mean that the regulator may often feel pressured to act in ways that it is not, technically, permitted to. This is not a risk limited to OfS – witness the Department for Education’s legal travails regarding Oxford Business College.

    Aspects requiring primary legislation

    The white paper offers the Office for Students a number of new powers to do things which – on the face of it – it can already do and has already done. What we need to keep an eye on here is where the amping up of these existing powers happens in a way that overrides safeguards that exist to prevent arbitrary and unfair regulatory action. It is already troubling that, unlike pretty much anyone else, the Office for Students cannot be sued for defaming a provider (for example by publishing details of an investigation that are later shown to be false).

    Quality

    The Department for Education seems to labour under the misconception that OfS cannot restrict a provider’s ability to recruit on the basis of “poor quality”. It can – and has done so four times since the regulator was established. Nonetheless, the government will legislate “when parliamentary time allows” to give OfS these powers again using slightly different words – and probably modifying sections 5 and 6 of HERA to allow it to do so (currently, the Secretary of State cannot give OfS guidance that relates to the recruitment and admission of students).

    This would be part of a wider portfolio of new powers for OfS, allowing it to intervene decisively to tackle poor quality provision (including within franchise arrangements), prevent the abuse of public money at registered providers, and safeguard against provision with poor outcomes for students.

    Again – these are powers, in the broadest sense, that the OfS already has. It has already intervened to tackle low quality provision (including poor quality outcomes for students) via the B3 and other B condition investigations and linked regulatory rulings. And it has already intervened on franchise arrangements (most recently opening an investigation into the arrangement between Bath Spa University and the Fairfield School of Business).

    There will be a strengthening of powers to close down provision where fraud or the misuse of public funds is identified – and here it is fair to read across to concerns about franchise provision and the work of (“unscrupulous”) UK recruitment agents. Condition E8 – which specifically addresses the wider issues of fraud and misuse of public funds – currently applies only to new registrants: it is fair to ask why extending this to currently registered providers is not under consideration as a non-legislative approach. Clearly the infamous powers of entry and search (HERA section 61) and the power to require information from unregistered providers (HERA section 62) are not cutting it.

    Linked to these, OfS will be able to build capacity to carry out more investigations and to do so at greater speed – for the first part of which, read: OfS will get more money from DfE. It already gets roughly £10m each year, which covers things like running TEF and administering the freedom of speech complaints scheme – this is on top of around £32m in registration fees from the sector (also public money), which sounds like a lot but doesn’t even cover staff costs at OfS. We are awaiting a consultation on OfS registration fees for providers for the future, so it is possible this situation may change.

    OfS’ proposed new quality regime is centred around TEF, a “section 25” scheme in the language of HERA. Schedule 2, section 2, of HERA is clear that a section 25 scheme can be used to vary the fee cap for individual providers. Indeed, it is currently used to vary the cap – if you don’t have a TEF award (at all) you can only charge a maximum of £9,275 next year. So no fancy legislative changes would be required to make fee uplifts conditional on a “higher quality threshold” if you happened to believe that a provider’s income per student should be determined by outcomes data from a decade ago.
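
    To make the mechanics concrete, here is a minimal sketch of that existing cap variation – the no-award figure is the one quoted above, while the with-award cap is an assumption for illustration only, not a figure from the white paper:

    ```python
    # Minimal sketch of fee cap variation under a section 25 (TEF) scheme.
    # NO_AWARD_CAP is the figure quoted in the text; FULL_CAP is an
    # assumed illustrative figure for a provider holding a TEF award.
    FULL_CAP = 9_535      # assumption, for illustration only
    NO_AWARD_CAP = 9_275  # stated cap with no TEF award at all

    def max_fee(has_tef_award: bool) -> int:
        """Maximum annual fee chargeable next year under the scheme."""
        return FULL_CAP if has_tef_award else NO_AWARD_CAP

    print(max_fee(True), max_fee(False))  # 9535 9275
    ```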

    Not strictly speaking “quality”, but OfS will also get stronger regulatory power to take robust action against providers that breach their duties under the Higher Education (Freedom of Speech) Act – beyond even fining providers (as it has recently done to the University of Sussex) and deregistering (or adding a condition of registration via conditions E1 and E2), a power it has had since HERA was passed. I’m not sure what would constitute more robust action than that.

    Access and participation

    The access and participation plan (APP) regime is a remnant of the work of the former Office for Fair Access (OFFA). The Higher Education Act 2004 gave this body the ability to call for and assess “access agreements”, with the approval of OFFA needed for a provider to charge higher fees. Section 29 of HERA gave the impression of handing these powers directly over to the Office for Students – but in actuality it gave a lot more direct power to the Secretary of State to specify, via regulations, the content of plans and the way they are assessed.

    The proposals in the white paper look for a risk-based approach to APP, but at provider level – not the more general risks associated with particular groups of students that we find in the OfS’ current approach. Providers that do well at access and participation will benefit from streamlined regulation; for those that do not, the experience may involve a little more pain.

    The big change is that access and participation will now look in a lot more detail at postgraduate provision and the postgraduate student experience. But section 32(5)(b) of HERA specifically prohibits plans from addressing “education provided by means of any postgraduate course other than a course of initial teacher training”. So we could expect some kind of legislative action (it may be possible via regulations, but if there is a bill coming then why not?) to address this issue. And besides that, there will be a load of regulations and guidance from the Secretary of State setting out what she would like John Blake or his successor to do.

    Aspects requiring changes to the regulatory framework

    Registration

    In what is fast becoming a more closely coupled tertiary sector, OfS is set to become a primary regulator for every provider of higher education. There are three sets of providers that will be affected by this move:

    • Further education colleges (FECs) delivering higher education (courses at levels 4 and above)
    • Other providers delivering provision currently funded via Advanced Learner Loans (ALL)
    • Other providers designated for student loan support, including those delivering courses via franchise and partnership arrangements.

    In each of these cases, provision that is to all intents and purposes higher education is currently delivered without the direct oversight of the higher education regulator. This may be delivered with the oversight and approval of a higher education provider (franchise and partnership provision), or with the oversight of Ofqual (there are hundreds of these).

    The regulation of this kind of provision within FECs is probably best understood – as things stand, all of the fundamental regulation of these bodies (stuff like governance and financial planning) happens via the Department for Education, which took on this role when the Education and Skills Funding Agency was abolished on 31 March 2025. The Department then provides assurance to the Office for Students and data to HESA.

    Designation for student support nominally happens via a decision made by the Secretary of State (section 84 of HERA) – in practice this happens by default for anyone delivering higher education. As we saw in the Oxford Business College case, arrangements like this are predicated on the assumption that what we might call regulation (quality and standards, and also – I guess – fit and proper person type stuff) is pushed onto the validating organisation with varying degrees of confidence.

    Advanced Learner Loan (ALL) funded provision, confusingly, is technically further education (level 3 and up) but the logic of the machinery of the Lifelong Learning Entitlement wants to bring the upper end of this provision into the ambit of OfS. There was initially supposed to be a separate category of registration for ALL provision with OfS, but this plan has been scrapped.

    We’ve known informally that it was unlikely to happen for some time, but it was actually the white paper that put the nail in the coffin. OfS will be consulting, this autumn, on the disapplication of certain conditions of registration for providers in the further education sector – though this shift will be a slow process, with current ALL arrangements extending through to 2030. But this consultation is very likely to extend much wider – recall that OfS is also tasked with a more robust approach to market entry (which, again, would be done via registration).

    Likewise, OfS has been tasked with toughening up the (E) conditions on governance, and the (D) conditions on financial sustainability (which would include forming a system-wide view of sector resilience working with UKRI) – we’ve seen evidence of a rethought approach to governance in the new conditions (E7 and E9) for initial registration, and have suspected that a further consultation would apply this to more providers.

    Degree awarding powers

    The ability to award your own qualifications is an important reputational stepping stone for any provider entering the higher education sector. It has an impact on the ability to design and run new courses, and also brings a financial benefit – no need to pay capitation on fee income to your academic partners. While quality and standards play a role in OfS registration decisions, these two aspects of provision are central to assessment for degree awarding powers as expressed via:

    An emerging self-critical, cohesive academic community with a clear commitment to the assurance of standards supported by effective (in prospect) quality systems.

    The current system (as of 1 April 2023) is run by the Office for Students after the decision of the QAA to demit from the role of Designated Quality Body. Aspects dealing with student protection, financial probity, and arrangements for progression are addressed as a precursor to a full assessment – and here OfS looks for evidence that courses have been developed and approved in accordance with sector recognised standards: currently copy-pasted from the QAA’s (2014) framework for higher education qualifications and the UKSCQA degree classification descriptions (2019).

    When this arrangement was set up back in 2022 it was somewhat controversial. There was no sign of the sector recognised standard that is the QAA Quality Code, and seemingly no mechanism to update the official list of standards recognised by the sector as they are restated elsewhere. There is a mention of sector recognised standards in HERA, but these need to be determined by “persons representing a broad range of registered higher education providers” and “command the confidence of registered higher education providers”.

    External examiners are not mentioned in the sector recognised standards (despite being a standard that is recognised by the sector), but are mentioned in DAPs criterion B3k on the quality of the academic experience, and in C1g on allowing academics to be external examiners elsewhere to gain experience (which F1i clarifies should be a third of academic staff where research degrees are offered). If you are applying for full DAPs you need to send OfS a sample of external examiner reports.

    In the white paper it is suggested that government is not convinced of the value of external examination – here’s the charge sheet:

    • We will consider the extent to which recent patterns of improving grades can be explained by an erosion of standards, rather than improved teaching or assessment practices
    • We will also continue to build the evidence base on the effectiveness or otherwise of the external examining system, which we will feed into the Office for Students’ programme for reform
    • We will also seek employers’ views about whether the academic system is giving graduates the skills and knowledge they need for the workplace.

    Of course, this sails worryingly close to devolved issues, as the external examiner infrastructure extends far beyond England: it is a requirement, supported by the sector, in Wales, Scotland, and Northern Ireland. External examiners do not often have any input into awarded degree classifications (those are determined by degree algorithms set internally by providers), so are not particularly likely to be a determining factor in more people getting a first.

    Indeed, the sector came together (back in 2022) to publish a set of External Examining Principles which exist as an annex to the statement of intent on degree classifications that forms a part of the OfS’s “sector-recognised standards.” It’s a good read for anyone who does not have a full understanding of the role of external examiners, both within institutional processes and those of the many professional, statutory, and regulatory bodies (PSRBs).

    This isn’t yet at the point of a consultation, just work the Office for Students is doing to update the process – a body of work that will also establish the concept and process of granting Higher Technical Qualification awarding powers. But we should watch some of the language around the next release of the OfS’ monitoring work on grade inflation – especially as the 2022 insight brief highlighted UUK work to strengthen the external examiner system as a key tool to address the issue.

    Other new responsibilities

    White papers generally try to make changes to the provision of applicant information – we have the 2004 white paper to thank for what is now known as Discover Uni, and the 2015 white paper put forward a simple precious metals based system that we have come to love as the Teaching Excellence Framework. Thankfully this time round it is a matter of incorporating Discover Uni style data onto UCAS course pages – which (and honestly, I’m sorry to keep doing this) you can already find in a “student outcomes” section operated by the Office for Students. The white paper asks for continuation data to be added to this widget – I imagine not a huge piece of work.

    It’s 2025, so every document has to mention what is popularly known as “artificial intelligence” and we more accurately describe as generative large language models. A few paragraphs tacked on to the end of the white paper ask OfS to assess the impact of such tools on assessments and qualifications – adding, almost in passing, that it expects that “all students will learn about artificial intelligence as part of their higher education experience”. In direct, but light-hearted I am sure, contravention of section 8 (a)(i) of HERA, which says that providers are free to determine the content of courses.

    Which brings us to Progress 8 – a measure used in schools policy which adds together each pupil’s highest scores from eight (hence the name, I suppose) GCSEs that the government thinks are important (English and maths, plus “English Baccalaureate” subjects like sciences, history, geography, and languages). It produces a cohort average used to compare schools (there it is called “Attainment 8”), and compares the average performance of pupils in a given school cohort with their results in similar subjects at primary school, as a kind of value added measure (“Progress 8”). In other words, DfE looking in the white paper to work with OfS to build Progress 8 but for higher education is another stab at learning gain measures – something we have been investigating since the days of HEFCE, and which has never been shown to work on a national scale.
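
    To make the arithmetic concrete, here is a toy sketch of the two measures – it ignores the real scheme’s subject buckets, double-weighted English and maths, and national baselines, and all pupil figures are invented:

    ```python
    # Toy Attainment 8 / Progress 8 style calculation. The real scheme
    # uses subject buckets, double-weighted English and maths, and
    # national KS2 baselines - all simplified away; pupil data invented.

    def attainment_score(gcse_points: list[float]) -> float:
        """Average of a pupil's eight best GCSE point scores
        (missing slots count as zero, as in the real measure)."""
        return sum(sorted(gcse_points, reverse=True)[:8]) / 8

    def progress_score(attainment: float, expected: float) -> float:
        """Value added: actual attainment minus the attainment
        expected from the pupil's primary school results."""
        return attainment - expected

    # (gcse_points, expected_attainment_from_primary_results) per pupil
    cohort = [
        ([7, 6, 6, 5, 5, 4, 4, 3, 2], 4.8),
        ([9, 8, 8, 7, 7, 7, 6, 6], 7.1),
        ([5, 5, 4, 4, 4, 3, 3, 2], 4.5),
    ]

    attainment_8 = sum(attainment_score(g) for g, _ in cohort) / len(cohort)
    progress_8 = sum(progress_score(attainment_score(g), e)
                     for g, e in cohort) / len(cohort)

    print(f"Attainment 8 (cohort average): {attainment_8:.2f}")
    print(f"Progress 8 (average value added): {progress_8:+.2f}")
    ```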

    Trust and confidence

    Regulation works via the consent of the regulated. Everyone from Universities UK down has been at pains to point out that they do see the value of higher education regulation, even if it was expressed in kind of a “more in sorrow than in anger” way at the primal therapy that was the House of Lords Industry and Regulators Committee.

    But this agreement over value is determined by a perception that the actions of the regulator are fair, predictable, and proportionate. These qualities can be seen by inexperienced regulators as a block to speedy and decisive action, but the work OfS has done to reset what was initially a very fractious relationship with the sector (and associated bodies) suggests that the importance of consensual regulation is fully understood on Lime Kiln Close.

    Every time the OfS gets, or asks for, new powers it affects the calculus of value to the sector. Here it is less a matter of new powers and more an issue of strengthening and extending existing powers (despite the occasionally confused language of the white paper). Everyone involved is surely aware that a strong power is a power that is perceived as fair – and is challengeable when it appears to be unfair. The occasional lawsuits OfS (and DfE) have faced have happened when someone was keen to do the right thing but did not go about it in the right way.

    The coming consultations – ahead of legislation and changes to the framework – need to be genuine listening exercises, even if this means adding the kind of nuance that slows things down, or reflecting on the powers OfS already has and looking for ways to improve their effective use.

  • Can regulation cope with a unified tertiary system in Wales?

    Medr’s second consultation on its regulatory framework reminds us both of the comparatively small size of the Welsh tertiary sector, and the sheer ambition – and complexity – of bringing FE, HE, apprenticeships and ACL under one roof.

    Back in May, Medr (the official name for the Commission for Tertiary Education and Research in Wales) launched its first consultation on the new regulatory system required by the Tertiary Education and Research (Wales) Act 2022.

    At that stage the sector’s message was that it was too prescriptive, too burdensome, and insufficiently clear about what was mandatory versus advisory.

    Now, five months later, Medr has returned with a second consultation that it says addresses those concerns. The documents – running to well over 100 pages across the main consultation text and six annexes – set out pretty much the complete regulatory framework that will govern tertiary education in Wales from August 2026.

    It’s much more than a minor technical exercise – it’s the most ambitious attempt to create a unified regulatory system across further education, higher education, apprenticeships, adult community learning and maintained school sixth forms that the UK has yet seen.

    As well as that, it’s trying to be both a funder and a regulator; to be responsive to providers while putting students at the centre; and to avoid some of the mistakes that it has seen the Office for Students (OfS) make in England.

    Listening and responding

    If nothing else, it’s refreshing to see a sector body listening to consultation responses. Respondents wanted clearer signposts about what constitutes a compliance requirement versus advisory guidance, and worried about cumulative burden when several conditions and processes come together.

    They also asked for alignment with existing quality regimes from Estyn and the Quality Assurance Agency, and flagged concerns about whether certain oversight might risk universities’ status as non-profit institutions serving households (NPISH) – a technical thing, but one with significant implications for institutional autonomy.

    Medr’s response has been to restructure the conditions more clearly. Each now distinguishes between the condition itself (what must be met), compliance requirements that evidence the condition, and guidance (which providers must consider but may approach differently if they can justify that choice).

    It has also adopted a “make once, use many” approach to information, promising to rely on evidence already provided to Estyn, QAA or other bodies wherever it fits their purpose. And it has aligned annual planning and assurance points with sector cycles “wherever possible.”

    The question, of course, is whether this constitutes genuine simplification or merely better-organised complexity. Medr is establishing conditions of registration for higher education providers (replacing Fee and Access Plans), conditions of funding for FE colleges and others, and creating a unified quality framework and learner engagement code that applies across all tertiary education.

    The conditions themselves

    Some conditions apply universally. Others apply only to registered providers, or only to funded providers, or only to specific types of provision. As we’ve seen in England, the framework includes initial and ongoing conditions of registration for higher education providers (in both the “core” and “alternative” categories), plus conditions of funding that apply more broadly.

    Financial sustainability requires providers to have “strategies in place to ensure that they are financially sustainable” – which means remaining viable in the short term (one to two years), sustainable over the medium term (three to five years), and maintaining sufficient resources to honour commitments to learners. The supplementary detail includes a financial commitments threshold mechanism based on EBITDA ratios.

    Providers exceeding certain multiples will need to request review of governance by Medr before entering new financial commitments. That’s standard regulatory practice – OfS has equivalent arrangements in England – but it represents new formal oversight for Welsh institutions.
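
    As a rough sketch of how a threshold mechanism of this kind operates – the multiple here is an invented placeholder, not Medr’s actual figure, which sits in the supplementary detail:

    ```python
    # Sketch of a financial commitments threshold test. The consultation
    # bases the mechanism on EBITDA ratios; the multiple used here is an
    # invented placeholder, not Medr's actual figure.
    ASSUMED_THRESHOLD = 4.0  # hypothetical commitments-to-EBITDA multiple

    def needs_governance_review(commitments: float, ebitda: float,
                                threshold: float = ASSUMED_THRESHOLD) -> bool:
        """True if proposed total commitments would exceed the multiple,
        triggering a request for Medr to review governance first."""
        if ebitda <= 0:
            return True  # no earnings cover: review regardless of multiple
        return commitments / ebitda > threshold

    # e.g. £80m of commitments against £15m EBITDA -> ratio ~5.3 -> review
    print(needs_governance_review(80_000_000, 15_000_000))  # True
    ```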

    Critically, Medr says its role is “to review and form an opinion on the robustness of governance over proposed new commitments, not to authorise or veto a decision that belongs to your governing body.” That’s some careful wording – but whether it will prove sufficient in practice (both in detail and in timeliness) when providers are required to seek approval before major financial decisions remains to be seen.

    Governance and management is where the sector seems to have secured some wins. The language around financial commitments has been softened from “approval” to “review.” The condition now focuses on outcomes – “integrity, transparency, strong internal control, effective assurance, and a culture that allows challenge and learning” – rather than prescribing structures.

    And for those worried about burden, registered higher education providers will no longer be required to provide governing body composition, annual returns of serious incidents, individual internal audit reports, or several other elements currently required under Fee and Access Plans. That is a reduction – but won’t make a lot of difference to anyone other than the person lumbered with gathering the sheaf of stuff to send in.

    Quality draws on the Quality Framework (Annex C) and requires providers to demonstrate their provision is of good quality and that they engage with continuous improvement. The minimum compliance requirements, evidenced through annual assurance returns, include compliance with the Learner Engagement Code, using learner survey outcomes in quality assurance, governing body oversight of quality strategies, regular self-evaluation, active engagement in external quality assessment (Estyn inspection and/or QAA review), continuous improvement planning, and a professional learning and development strategy.

    The framework promises that Medr will “use information from existing reviews and inspections, such as by Estyn and QAA” and “aim not to duplicate existing quality processes.” Notably, Medr has punted the consultation on performance indicators to 2027, so providers won’t know what quantitative measures they’ll be assessed against until the system is already live.

    Staff and learner welfare sets out requirements for effective arrangements to support and promote welfare, encompassing both “wellbeing” (emotional wellbeing and mental health) and “safety” (freedom from harassment, misconduct, violence including sexual violence, and hate crime). Providers will have to conduct an annual welfare self-evaluation and submit an annual welfare action plan to Medr. This represents new formal reporting – even if the underlying activity isn’t new.

    The Welsh language condition requires providers to take “all reasonable steps” to promote greater use of Welsh, increase demand for Welsh-medium provision, and (where appropriate) encourage research and innovation activities supporting the Welsh language. Providers must publish a Welsh Language Strategy setting out how they’ll achieve this, with measurable outcomes over a five-year rolling period and annual milestones. For providers subject to Welsh Language Standards under the Welsh Language (Wales) Measure 2011, compliance with those standards provides baseline assurance. Others must work with the Welsh Language Commissioner through the Cynnig Cymraeg.

    Learner protection plans will be required when Medr gives notice – typically triggered by reportable events, course closures, campus closures, or significant changes to provision. The guidance (in the supplementary detail from page 86 onwards) is clear about what does and doesn’t require a plan. Portfolio review and planned teach-out? Generally fine, provided learners are supported. Closing a course mid-year with no teach-out option? Plan required. Whether this offers the sort of protection that students need – especially when changes are made to courses to reduce costs – will doubtless come up in the consultation.

    And then there’s the Learner Engagement Code, set out in Annex D. This is where student representative bodies may feel especially disappointed. The Code is principles-based rather than rights-based, setting out nine principles (embedded, valued, understood, inclusive, bilingual, individual and collective, impactful, resourced, evaluated) – but creates no specific entitlements or rights for students or students’ unions.

    The principles themselves are worthy enough – learners should have opportunities to engage in decision-making, they should be listened to, routes for engagement should be clear, opportunities should reflect diverse needs, learners can engage through Welsh, collective voice should be supported, engagement should lead to visible impact, it should be resourced, and it should be evaluated. But it does all feel a bit vague.

    Providers will have to submit annual assurance that they comply with the Code, accompanied by evidence such as “analysis of feedback from learners on their experience of engagement” and “examples of decisions made as a result of learner feedback.” But the bar for compliance appears relatively low. As long as providers can show they’re doing something in each area, they’re likely to be deemed compliant. For SUs hoping for statutory backing for their role and resources, this will feel like a missed opportunity.

    Equality of opportunity is more substantial. The condition requires providers to deliver measurable outcomes across participation, retention, academic success, progression, and (where appropriate) participation in postgraduate study and research. The supplementary detail (from page 105) sets out that providers must conduct ongoing self-evaluation to identify barriers to equality of opportunity, then develop measurable outcomes over a five-year rolling period with annual milestones.

    Interestingly, there’s a transition period – in 2026-27, HE providers with Fee and Access Plans need only provide a statement confirming continued commitments. Full compliance – including submission of measurable outcomes – isn’t required until 2027-28, with the first progress reports due in 2028-29. That’s a sensible approach given the sector’s starting points vary considerably, but it does mean the condition won’t bite with full force for three years.

    Monitoring and intervention

    At the core of the monitoring approach is an Annual Assurance Return – where the provider’s governing body self-declares compliance across all applicable conditions, supported by evidence. This is supplemented by learner surveys, Estyn/QAA reviews, public information monitoring, complaints monitoring, reportable events, data monitoring, independent assurance, engagement activities and self-evaluation.

    The reportable events process distinguishes between serious incidents (to be reported within 10 working days) and notifiable events (reported monthly or at specified intervals). There are 17 categories of serious incidents, from loss of degree awarding powers to safeguarding failures to financial irregularities over £50,000 or two per cent of turnover (whichever is lower). A table lists notifiable events including senior staff appointments and departures, changes to validation arrangements, and delays to financial returns. It’s a consolidation of existing requirements rather than wholesale innovation, but it’s now formalised across the tertiary sector rather than just HE.
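
    The “whichever is lower” wording is worth unpacking – a quick sketch, with the function name mine:

    ```python
    # Sketch of the "whichever is lower" test for reporting a financial
    # irregularity as a serious incident (function name is mine).
    def is_reportable_irregularity(amount: float, turnover: float) -> bool:
        """Reportable if the amount exceeds £50,000 or two per cent of
        turnover - whichever gives the lower threshold."""
        return amount > min(50_000, 0.02 * turnover)

    # A £45,000 irregularity at a provider with £1.5m turnover is
    # reportable: 2% of turnover (£30,000) is the lower threshold.
    print(is_reportable_irregularity(45_000, 1_500_000))  # True
    ```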

    Medr’s Statement of Intervention Powers (Annex A) sets out escalation from low-level intervention (advice and assistance, reviews) through mid-level intervention (specific registration conditions, enhanced monitoring) to serious “directive” intervention (formal directions) and ultimately de-registration. The document includes helpful flowcharts showing the process for each intervention type, complete with timescales and decision review mechanisms. Providers can also apply for a review by an independent Decision Reviewer appointed by Welsh Ministers – a safeguard that universities dream of in England.

    Also refreshingly, Medr commits to operating “to practical turnaround times” when reviewing financial commitments, with the process “progressing in tandem with your own processes.” A six-week timeline is suggested for complex financing options – although whether this proves workable in practice will depend on Medr’s capacity and responsiveness.

    Quality

    The Quality Framework (Annex C) deserves separate attention because it’s genuinely attempting something ambitious – a coherent approach to quality across FE, HE, apprenticeships, ACL and sixth forms that recognises existing inspection/review arrangements rather than duplicating them.

    The framework has seven “pillars” – learner engagement, learner voice, engagement of the governing body, self-evaluation, externality, continuous improvement and professional learning and development. Each pillar sets out what Medr will do and what providers must demonstrate. Providers will be judged compliant if they achieve “satisfactory external quality assessment outcomes,” have “acceptable performance data,” and are not considered by Medr to demonstrate “a risk to the quality of education.”

    The promise is that:

    …Medr will work with providers and with bodies carrying out external quality assessment to ensure that such assessment is robust, evidence-based, proportionate and timely; adds value for providers and has impact in driving improvement.

    In other words, Estyn inspections and QAA reviews should suffice, with Medr using those outcomes rather than conducting its own assessments. But there’s a caveat:

    Medr has asked Estyn and QAA to consider opportunities for greater alignment between current external quality assessment methodologies, and in particular whether there could be simplification for providers who are subject to multiple assessments.

    So is the coordination real or aspirational? The answer appears to be somewhere in between. The framework acknowledges that by 2027, Medr expects to have reviewed data collection arrangements and consulted on performance indicators and use of benchmarking and thresholds. Until that consultation happens, it’s not entirely clear what “acceptable performance data” means beyond existing Estyn/QAA judgements. And the promise of “greater alignment” between inspection methodologies is a promise, not a done deal.

    A tight timeline

    The key dates bear noting because they’re tight:

    • April 2026: Applications to the register open
    • August 2026: Register launches; most conditions come into effect
    • August 2027: Remaining conditions (Equality of Opportunity and Fee Limits for registered providers) come into full effect; apprenticeship providers fully subject to conditions of funding

    After all these years, we seem to be looking at some exit acceleration. It gives providers roughly four months from the consultation closing (17 December 2025) to the application process opening. Final versions of the conditions and guidance presumably need to be published early in 2026 to allow preparation time. And all of this is happening against the backdrop of Senedd elections in 2026 – where polls suggest that some strategic guidance could be dropped on the new body fairly sharpish.

    And some elements remain unresolved or punted forward. The performance indicators consultation promised for 2027 means providers won’t know the quantitative measures against which they’ll be assessed until the system is live. Medr says it will “consult on its approach to defining ‘good’ learner outcomes” as part of a “coherent, over-arching approach” – but that’s after registration and implementation have begun.

    Validation arrangements are addressed (providers must ensure arrangements are effective in enabling them to satisfy themselves about quality), but the consultation asks explicitly whether the condition “could be usefully extended into broader advice or guidance for tertiary partnerships, including sub-contractual arrangements.” That suggests Medr has been reading some of England’s horror stories and recognises the area needs further work.

    And underlying everything is the question of capacity – both Medr’s capacity to operate this system effectively from day one, and providers’ capacity to meet the requirements while managing their existing obligations. The promise of reduced burden through alignment and reuse of evidence is welcome.

    But a unified regulatory system covering everything from research-intensive universities to community-based adult learning requires Medr to develop expertise and processes across an extraordinary range of provision types. Whether the organisation will be ready by August 2026 is an open question.

    For providers, the choice is whether to engage substantively with this consultation knowing that the broad architecture is set by legislation, or to focus energy on preparing for implementation. For Welsh ministers, the challenge is whether this genuinely lighter-touch, more coherent approach than England’s increasingly discredited OfS regime can be delivered without compromising quality or institutional autonomy.

    And for students – especially those whose representative structures were hoping for statutory backing – there’s a question about whether principles-based engagement without rights amounts to meaningful participation or regulatory box-ticking.

    In England, some observers will watch with interest to see whether Wales has found a way to regulate tertiary education proportionately and coherently. Others will see in these documents a reminder that unified systems, however well-intentioned, require enormous complexity to accommodate the genuine diversity of the sector. The consultation responses, due by 17 December, will expose which interpretation the Welsh sector favours.

  • Why is regulation on disabled students so weak?

    When I read university strategies, there tend to be three themes – teaching, research, and that stuff that underpins it.

    If I’m glancing through students’ union strategies, there’s almost always a version of voice, activities/opportunities, and that stuff that underpins it.

    And so it is also the case that when we think about higher education regulation in England, everything from the TEF to the Regulatory Framework tends to have a triangle too – there’s experience, outcomes and that other stuff.

    The trouble is that the case of disabled students presents a bit of a problem for the design of the regulation.

    Whatever the current design or theory of change being deployed, the basic question that OfS asks providers to ask is – are disabled students’ outcomes worse than everyone else’s?

    The underpinning theory is that if they are, that’s bound to be because their experience is worse. And if the experience was so poor as to be unlawful, that would definitely show up in outcomes.

    But what if, despite the experience being considerably (and often unlawfully) worse, the outcomes are broadly comparable – or even better? Where does that leave regulation that tends to start with outcomes and work backwards, rather than start with experience and then feed forwards?

    A new brief

    The Office for Students (OfS) has published new research that seems to show that disabled students are increasingly dissatisfied with their university experience even as their degree outcomes improve.

    The regulator has released two documents – a new insight brief examining equality of opportunity for disabled students, and commissioned research from Savanta exploring how 150 students experienced applying for reasonable adjustments.

    The publications come via work from the OfS Disability in Higher Education Advisory Panel, which was established in April 2024 to improve disabled students’ experiences and provide expert guidance.

    The latest data reveals an interesting pattern. For full-time undergraduates with reported disabilities, continuation rates are now 1.1 percentage points higher than for non-disabled peers – and attainment rates are 2.0 percentage points higher. That’s a significant shift from 2019 when disabled students lagged behind on both measures.

    It’s worth saying that, albeit on a smaller N, part-time undergraduates and degree apprentices tell a different story. Part-time disabled students have completion rates 13.0 percentage points lower than their non-disabled peers whilst degree apprentices show a 5.0 percentage point gap in attainment. These gaps suggest that not all disabled students are benefiting equally from institutional support.

    But back on full-time students, when it comes to experience, National Student Survey (NSS) results paint a very different picture. Disabled students consistently report lower satisfaction across all seven themes measured by the survey, and the gaps have grown over the past two years.

    The difference in satisfaction with organisation and management has widened from 6.5 percentage points in 2023 to 7.5 percentage points in 2025. Assessment and feedback satisfaction gaps have grown from 2.5 to 3.7 percentage points over the same period.

    Complaints to the Office of the Independent Adjudicator (OIA) tell a similar story. Disabled students now represent over 40 per cent of OIA complaints, up from around one-third in 2023. More significantly, a higher proportion of disabled students’ complaints are being upheld, suggesting some universities are failing to meet their legal obligations.

    Six years on

    The insight brief isn’t OfS’ first disabled students insight rodeo. 2019’s Insight brief asked whether universities were doing enough for disabled students. It contained a prescient observation:

    “Many disabled students are achieving despite the barriers which remain in their way, not because these barriers have been entirely removed.”

    Over time, the disabled student population has grown substantially. In 2017, 13.2 per cent of students reported a disability. By 2023-24, this had risen to 19.9 per cent of full-time undergraduates and 24.6 per cent of part-time undergraduates. Mental health conditions have driven much of this increase, growing from 0.6 per cent of all students in 2010 to representing a significant proportion of disabled students today.

    2019 focused heavily on the social model of disability and questioned whether universities had truly embedded inclusive practices into their institutional structures. It noted that whilst many providers claimed to follow the social model, in practice they still treated disabled students as problems to be solved rather than addressing environmental barriers.

    2025’s brief takes a more pragmatic approach. Rather than debating models of disability, it provides a checklist of specific actions universities should take on experience that draws on the new evidence sources – including workshops with 105 university representatives and the Savanta research to understand both student experiences and institutional challenges.

    You could call it a statement of expectations, although OfS doesn’t quite go that far.

    The Savanta research found that 43 per cent of disabled students had applications for reasonable adjustments fully or partially rejected. Of those students whose needs were not fully met, 91 per cent took further action such as seeking advice or lodging complaints. This level of self-advocacy suggests that students are fighting for support rather than receiving it as a matter of course.

    The research also revealed significant differences between mature and younger students. Mature students were much more likely to take proactive steps when their support was inadequate, with 53 per cent following up or escalating concerns compared with 31 per cent of younger students. Success appears to depend partly on students’ ability to work the system rather than the system working for students.

    Implementation delays are another indicator that students are succeeding despite rather than because of support arrangements. Over half of students who received positive application outcomes waited five weeks or longer for support to be implemented. Students with three or more health conditions faced even longer waits, with 73 per cent waiting five weeks or more for exam adjustments compared with 45 per cent of students with fewer conditions.

    Workshops with university representatives showed that only 15.2 per cent of institutions have established processes for systematically evaluating whether reasonable adjustments are effective. That suggests most universities are not learning from experience or improving their support based on evidence of what works. Students are therefore navigating systems that are not designed to continuously improve.

    And the National Student Survey data on organisation and management is particularly telling. This theme, which includes questions about whether the course is well organised and running smoothly and whether the timetable works efficiently, shows the largest gap between disabled and non-disabled students at 7.5 percentage points. If disabled students are achieving good academic outcomes whilst rating organisational aspects poorly, they must be compensating for institutional failings through extra effort.

    Disabled Students UK’s 2024 research reinforces this picture. It found that only 38 per cent of disabled students who declared their disability reported having the support they need to access studies on equal terms with non-disabled peers. It also noted that most disabled students hold back from raising access issues with their university, suggesting they are managing barriers independently rather than relying on institutional support.

    And the OIA’s annual reports note that disabled students are overrepresented in complaints and that events occurring because a student is disabled are likely to have significant and lasting impacts. The 2024 report specifically highlighted complaints about implementation of support and reasonable adjustments to teaching and assessment. If support systems were working effectively, disabled students wouldn’t need to resort to formal complaints at such high rates.

    The brief reminds readers that the Equality of Opportunity Risk Register now explicitly identifies being disabled as a characteristic indicating risk to student success, and that Access and Participation Plans must address gaps in disabled students’ outcomes with specific targets – with OfS then monitoring progress against these commitments.

    But there’s a problem. Providers would have to pick those risks, and pick disabled students.

    We (don’t) have a plan

    If we look across the 99 now-published Access and Participation Plans for universities, 27 providers have no disability targets whatsoever at any stage of the student lifecycle, including widening access.

    Then if we isolate targets related to experience (ie we ignore access), 35 providers have set no targets for disabled students in the continuation, completion, attainment or progression stages. This means over one-third of institutions have no measurable goals for improving outcomes once disabled students arrive on campus.

    Most that do have a target don’t have them in all three of the experience measures. And even those that have targets often have them for a subset of disabled students where the disability type suggests a gap.

    If we assume that providers have been reasonable in not selecting disabled students and/or the risks in the EORR associated with disabled students, it’s a design problem. For a start, when an issue is spread thinly across providers and you have a provider-based regulatory system, you don’t get detailed plans in large parts of the long tail – and so the actions are absent.

    But that’s not the only problem. If we then turn to what providers say they do or are promising to do and look at the aspects of OfS’ checklist that directly relate to student experience, just 39 discuss a process for students to raise issues if support isn’t meeting needs or isn’t implemented properly, and none of the others (working with and listening to disabled students, communication about reasonable adjustments, sharing information about adjustments across the institution and ensuring teaching and assessments are accessible for disabled students while maintaining rigour) go above 60.

    Even then, we tend to see descriptions of existing activity and service provision rather than a new and properly resourced intervention. After all, who’s going to put in their plan that what’s new for this cycle is the provider complying with the law?

    Imagine if the design worked the other way. OfS – as it did with Harassment and Sexual Misconduct (first with a Statement of Expectations, then through a formal Regulatory Condition) – sets out expectations. Then, through polling (or ideally an NSS extension, again à la H&SM), it determines whether students are experiencing those expectations. Then it can take both system-wide and provider-level action.

    That – as is also the case with Harassment and Sexual Misconduct – might all lead to better outcomes; it might not. But those design flaws mean that for plans to be made and action to be monitored to secure students’ basic legal rights during their HE, there have to be a decent number of disabled students at their provider, and they have to be failing. If not, no promised action.

    Checklists and ticked boxes

    Overall, we’re left with a checklist – one that represents a pragmatic attempt to provide universities with clear guidance about what they should be doing to support disabled students. The questions about personalisation, implementation, communication, information-sharing, complaints processes, evaluation and accessible assessment all address real problems identified in the research.

    But that checklist’s weaknesses reflect a broader challenge in OfS regulation of experience. The questions are framed as prompts for institutional reflection rather than as requirements with clear standards. That approach may encourage tonal buy-in from universities, but it risks allowing institutions to tick boxes without making meaningful changes. And that’s if they even download the PDF.

    The checklist doesn’t specify what good looks like in any of the areas. It doesn’t set expectations about response times, explain what effective information-sharing systems should include, or define what routine evaluation means in practice. The lack of specificity makes it difficult for institutions to know whether they are meeting expectations, or for OfS to hold them accountable.

    Nor does the checklist address the resource constraints that universities identified as barriers to supporting disabled students effectively. The workshops noted that more students are reporting disabilities, that many have complex support needs and that institutions face staff shortages and stretched budgets.

    Unlike on H&SM – where OfS says “afford this detail or don’t provide HE” – the checklist acknowledges none of the challenges nor provides guidance about how universities should prioritise support when resources are limited.

    As usual on disability, no teeth are being bared here – a list of questions to muse on, rather than requirements to meet, and no consequences for those that fail.

    To be fair, the brief notes that students can make internal complaints, complain to the OIA or take their university to court. But as OfS CEO Susan Lapworth herself said about students in general – let alone disabled students – back in 2019:

    We should… consider whether a model that relies primarily on individual students challenging a provider for a breach of contract places a burden on students in an undesirable way.

    As I say, the checklist is a useful starting point for institutional self-reflection. But without clearer standards, stronger accountability mechanisms and recognition of the resource challenges universities face, it is unlikely to transform disabled students’ experiences, and is more likely to be just another PDF whose link I look up in a few years’ time in another article like this.

    And crucially, the evidence suggests that plenty of disabled students will continue to succeed despite, rather than because of, the laws that are supposed to achieve equality.

  • Transparency Now or Regulation Later

    Doctors predicted Wayne Frederick, the president of Howard University, wouldn’t live past 8. Now he’s 54. Frederick came to the U.S. from Trinidad and Tobago with a dream of finding a cure for his disease, sickle cell anemia, but detoured into higher ed administration.

    At an event hosted by the American Council on Education at Howard University this week, Frederick said CRISPR gene editing, a technology developed in academia, made his dream a reality. Finding cures to debilitating diseases is one of “the intangible things that higher ed does to change lives,” he said.

    Higher ed has changed lives in thousands of other ways; institutions are the largest employers in 10 states; colleges have helped regenerate many of America’s Rust Belt centers. Higher education is undeniably a public good. But as concerns grow about the affordability of college, do Americans care?

    In the ACE event’s discussion about the economic impact of higher ed, Alex Ricci, president of the National Council on Higher Education Resources, pointed out that despite college’s role in local and regional economies, the debate about the value of higher ed comes down to whether one thinks the benefit to the individual is greater than to society as a whole. “Many colleges and universities see themselves as a benefit shared broadly by society. Most Americans—especially those carrying thousands of dollars in student loan debt—see it as a transaction where the individual is the primary beneficiary or victim, depending on the student’s long-term outcomes,” he said.

    Regardless of whether you think higher ed is a public or private good, institutions are losing the value debate. In recorded remarks for the discussion, Representative Burgess Owens, a Utah Republican and chairman of the House subcommittee on higher education and workforce development, said, “Higher education should be about value, not just prestige.” He also presided over the “No More Surprises: Reforming College Pricing for Students and Families” hearing last month, where lawmakers examined ways to make college costs more transparent.

    The lack of transparency on the cost of college can be life-altering for students and poses existential risks for colleges. Inside Higher Ed’s 2025 student survey found that three-fourths of the 5,000 respondents encountered some surprises in the cost of their education. These surprises can derail education journeys. One in five students said that an unexpected expense of $500 to $1,000 would threaten their ability to persist. Bad surprises also harm colleges: Students say that the lack of affordability is the biggest driver of declining public trust in higher education.

    College cost transparency has been a government priority since the Obama administration, but never has public trust in higher ed been so low or institutions so vulnerable to government overreach. Republican lawmakers have seized on the problem of college affordability and cost transparency and are looking for bipartisan solutions. In May, Senator Chuck Grassley, a Republican from Iowa, introduced the Understanding the True Cost of College Act of 2025, which calls for standardization of financial aid offers so students understand in simple terms what the direct costs, indirect costs and net price of college will be. Last month the Senate Committee on Health, Education, Labor and Pensions formally requested information from the sector on ways to improve transparency, lower costs and ensure a college degree is valuable to students.

    Some colleges sense the urgency of the moment and are taking action on affordability. More are offering free tuition to households earning as much as $200,000 a year. Last month Whitworth University made a radical decision to stop tuition discounting and decrease its annual sticker price from $54,000 to $26,900. At the same time, a recent study found that tuition discounting is on the rise among public four-year institutions. But tuition discounts create more confusion around the true cost of college.
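
    A toy illustration of the confusion: a high-sticker, high-discount model and a Whitworth-style reset can cost the average student almost the same while sending very different price signals. The sticker prices are the figures quoted above; the 50 per cent average discount rate is an invented assumption:

    ```python
    # Toy illustration of why discounting muddies the price signal: the
    # sticker prices are Whitworth's figures quoted above, but the 50 per
    # cent average discount rate is an invented assumption.
    def net_price(sticker: float, discount_rate: float) -> float:
        """Average price actually paid after institutional aid."""
        return sticker * (1 - discount_rate)

    # High sticker with heavy discounting vs. a Whitworth-style reset:
    print(net_price(54_000, 0.50))  # 27000.0 - roughly the same cost...
    print(net_price(26_900, 0.00))  # 26900.0 - ...as the honest sticker
    ```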

    A reasonable question to ask is: Why are only 730 colleges members of the College Cost Transparency Initiative? If higher ed stakeholders wanted to win the value debate, they would listen to lawmakers—and students and their families—and act on affordability and cost transparency. Otherwise, policymakers will do it for them. By demonstrating their impact for individual students, colleges can make a compelling case for their broader societal value.

    Source link

  • What the saga of Oxford Business College tells us about regulation and franchising

    What the saga of Oxford Business College tells us about regulation and franchising

    One of the basic expectations of a system of regulation is consistency.

    It shouldn’t matter how prestigious you are, how rich you are, or how long you’ve been operating: if you are active in a regulated market then the same rules should apply to all.

    Regulatory overreach can happen when there is public outrage over elements of what is happening in that particular market. The pressure a government feels to “do something” can override processes and requirements – attempting to reach the “right” (political or PR) answer rather than the “correct” (according to the rules) one.

    So when courses at Oxford Business College were de-designated by the Secretary of State for Education, there’s more to the tale than a provider where legitimate questions had been raised about the student experience getting its just deserts. It is a cautionary tale, involving a fascinating High Court judgement and some interesting arguments about the limits of ministerial power, of what happens when political will gets ahead of regulatory processes.

    Business matters

    A splash in The Sunday Times back in the spring concerned the quality of franchised provision from – as it turned out – four Office for Students registered providers taught at Oxford Business College. The story came alongside tough language from Secretary of State for Education Bridget Phillipson:

    I know people across this country, across the world, feel a fierce pride for our universities. I do too. That’s why I am so outraged by these reports, and why I am acting so swiftly and so strongly today to put this right.

    And she was in no way alone in feeling that way. Let’s remind ourselves, the allegations made in The Sunday Times were dreadful. Four million pounds in fraudulent loans. Fake students, and students with no apparent interest in studying. Non-existent entry criteria. And, as we shall see, that’s not even as bad as the allegations got.

    De-designation – removing the eligibility of students at a provider to apply for SLC fee or maintenance loans – is one of the few levers government has to address “low quality” provision at an unregistered provider. Designation comes automatically when a course is franchised from a registered provider: a loophole in the regulatory framework that has caused concern over a number of years. Technically an awarding provider is responsible for maintaining academic quality and standards for its students studying elsewhere.

    The Office for Students didn’t have any regulatory jurisdiction other than pursuing the awarding institutions. OBC had, in fact, tried to register with OfS – withdrawing the application in the teeth of the media firestorm at the end of March.

    So everything depended on the Department for Education overturning precedent.

    Ministering

    It is “one of the biggest financial scandals universities have faced.” That’s what Bridget Phillipson said when presented with The Sunday Times’ findings. She announced that the Public Sector Fraud Authority would coordinate immediate action, and promised to empower the Office for Students to act in such cases.

    In fact, OBC was already under investigation by the Government Internal Audit Agency (GIAA) and had been since 2024. DfE had been notified by the Student Loans Company about trends in the data and other information that might indicate fraud at various points between November 2023 and February 2024 – notifications that we now know were summarised as a report detailing the concerns which was sent to DfE in January 2024. The eventual High Court judgement (the details of which we will get to shortly) outlined just a few of these allegations, which I take from the court documents:

    • Students enrolled in the Business Management BA (Hons) course did not have basic English language skills.
    • Less than 50 per cent of students enrolled at the London campus participate, and the remainder instead pay staff to record them as in attendance.
    • Students have had bank details altered or new bank accounts opened in their name, to which their maintenance payments were redirected.
    • Staff are encouraging fraud through fake documents sent to SLC, fake diplomas, and fake references. Staff are charging students to draft their UCAS applications and personal statements. Senior staff are aware of this and are uninterested.
    • Students attending OBC do not live in the country. In one instance, a dead student was kept on the attendance list.
    • Students were receiving threats from agents demanding money and, if the students complained, their complaints were often dealt with by those same agents threatening the students.
    • Remote utilities were being used for English language tests, with computers controlled remotely to respond to the questions on behalf of prospective students.
    • At the Nottingham campus, employees and others were demanding money from students for assignments and to mark their attendance to avoid being kicked off their course.

    At the instigation of DfE, and with the cooperation of OBC, GIAA started its investigation on 19 September 2024, continuing to request information from and correspond with the college until 17 January 2025. An “interim report” detailing emerging findings went to DfE on 17 December 2024; the final report arrived on 30 January 2025. The final report made numerous recommendations about OBC processes and policies, but did not recommend de-designation. That recommendation came in a ministerial submission, prepared by civil servants, dated 18 March 2025.

    Process story

    OBC didn’t get sight of these reports until 20 March 2025, after the decisions were made. It got summaries of both the interim and final reports in a letter from DfE notifying it that Phillipson was “minded to” de-designate. The documentation tells us that GIAA reported that OBC had:

    • recruited students without the required experience and qualifications to successfully complete their courses
    • failed to ensure students met the English language proficiency as set out in OBC and lead provider policies
    • failed to ensure attendance is managed effectively
    • failed to withdraw or suspend students who fell below the required thresholds for performance and/or engagement
    • failed to provide evidence that immigration documents, where required, are being adequately verified.

    The college had 14 days to respond to the summary and provide factual comment for consideration, during which period The Sunday Times published its story. OBC asked DfE for the underlying material that informed the findings and the subsequent decision, and for an extension (it didn’t get all the material, but it got a further five days) – and it submitted 68 pages of argument and evidence to DfE, on 7 April 2025. Another departmental ministerial submission (on 16 April 2025) recommended that the Secretary of State confirm the decision to de-designate.

    According to the OBC legal team, these emerging findings were not backed up by the full GIAA reports, and there were concerns about the way a small student sample had been used to generalise across an entire college. Most concerningly, the reports as eventually shared with the college did not support de-designation (though they supported a number of other concerns about OBC and its admission process). This was supported by a note from GIAA regarding OBC’s submission, which – although conceding that aspects of the report could have been expressed more clearly – concluded:

    The majority of the issues raised relate to interpretation rather than factual accuracy. Crucially, we are satisfied that none of the concerns identified have a material impact on our findings, conclusions or overall assessment.

    Phillipson’s decision to de-designate was sent to the college on 17 April 2025, and it was published as a Written Ministerial Statement. Importantly, in her letter, she noted that:

    The Secretary of State’s decisions have not been made solely on the basis of whether or not fraud has been detected. She has also addressed the issue of whether, on the balance of probabilities, the College has delivered these courses, particularly as regards the recruitment of students and the management of attendance, in such a way that gives her adequate assurance that the substantial amounts of public money it has received in respect of student fees, via its partners, have been managed to the standards she is entitled to expect.

    Appeal

    Oxford Business College appealed the Secretary of State’s decision. Four grounds of challenge were pursued:

    • Ground 3: the Secretary of State had stepped beyond her powers in prohibiting OBC from receiving public funds for new franchised courses in the future.
    • Ground 1: the decision was procedurally unfair, with key materials used by the Secretary of State in making the decision not provided to the college, and the college never told the criteria against which it was being assessed.
    • Ground 4: by de-designating courses, DfE breached OBC’s rights under Article 1 of the First Protocol to the European Convention on Human Rights (to peaceful enjoyment of its possessions – in this case the courses themselves).
    • Ground 7: the decision by the Secretary of State had breached the public sector equality duty.

    Of these, ground 3 was not determined, as the Secretary of State had clarified that no decision had been taken regarding future courses delivered by OBC. Ground 4 was deemed to be a “controversial” point of law regarding whether a course and its designation status could be a “possession” under ECHR, but could be proceeded with at a later date. Ground 7 was not decided.

    Ground 1 succeeded. The court found that OBC had been subject to an unfair process, where:

    OBC was prejudiced in its ability to understand and respond to the matters of the subject of investigation, including as to the appropriate sanction, and to understand the reasons for the decision.

    Judgement

    OBC itself, or the lawyers it engaged, has perhaps unwisely decided to put the judgement into the public domain – it has yet to be formally published. I say unwisely, because it also puts the initial allegations into the public domain and does not detail any meaningful rebuttal from the college – though The Telegraph has reported that the college now plans to sue the Secretary of State for “tens of millions of pounds.”

    The win, such as it is, was entirely procedural. The Secretary of State should have shared more detail of the findings of the GIAA investigation (at both “emerging” and “final” stages) in order that the college could make its own investigations and dispute any points of fact.

    Much of the judgement deals with the criteria by which a sample of 200 students was selected – OBC was not made aware that this was a sample comprising those “giving the greatest cause for suspicion” rather than a random sample – and with OBC’s inability to identify the students whose circumstances or behaviour were mentioned in the report. These were omissions, but nowhere does OBC argue that these were not real students with real experiences.

    Where allegations are made that students might be being threatened by agents and institutional staff, it is perhaps understandable that identifying details might be redacted – though DfE cited the “pressure resulting from the attenuated timetable following the order for expedition, the evidence having been filed within 11 days of that order” for difficulties faced in redacting the report properly. On this point, DfE noted that OBC, using the materials provided, “had been able to make detailed representations running to 68 pages, which it had described as ‘comprehensive’ and which had been duly considered by the Secretary of State”.

    The Secretary of State, in evidence, rolled back from the idea that she could automatically de-designate future courses without specific reason, but this does not change the decisions she has made about the five existing courses delivered in partnership. Neither does it change the fact that OBC, having had five courses forcibly de-designated, and seen the specifics of the allegations underpinning this exceptional decision put into the public domain without any meaningful rebuttal, may struggle to find willing academic partners.

    The other chink of legal light came with an argument that a contract (or subcontract) could be deemed a “possession” under certain circumstances, and that Article 1 of the First Protocol to the European Convention on Human Rights protects the peaceful enjoyment of possessions. The judgement admits that there could be grounds for debate here, but that debate has not yet happened.

    Rules

    Whatever your feelings about OBC, or franchising in general, the way in which DfE appears to have used a carefully redacted and summarised report to remove an institution from the sector is concerning. If the rules of the market permit behaviour that ministers do not like, then these rules need to be re-written. DfE can’t just regulate based on what it thinks the rules should be.

    The college issued a statement on 25 August, three days after the judgement was published – it claims to be engaging with “partner institutions” (named as Buckinghamshire New University, University of West London, Ravensbourne University London, and New College Durham – though all four had already ended their partnerships, with the remaining students being “taught out”) about the future of the students affected by the de-designation decision – many had already transferred to other courses at other providers.

    In fact, the judgement tells us that of 5,000 students registered at OBC on 17 April 2025, around 4,700 had either withdrawn or transferred out of OBC to be taught out. We also learn that 1,500 new students, who had planned to start an OBC-delivered course after 2025, would no longer be doing so. Four lead providers had given notice to terminate franchise agreements between April 2024 and May 2025. Franchise discussions with another provider – Southampton Solent University – which were underway shortly before the decision to de-designate, had ended.

    OBC currently offers one course itself (no partnership offers are listed) – a foundation programme covering academic skills and English language, including specialisms in law, engineering, and business – which is designed to prepare students for the first year of an undergraduate degree course. It is not clear what award this course leads to, or how it is regulated. It is also expensive – a six-month version (requiring IELTS 5.5 or above) costs an eye-watering £17,500. And there is no information as to how students might enrol on this course.

    OBC’s statement about the court case indicates that it “rigorously adheres to all regulatory requirements”, but it is not clear which (if any) regulator has jurisdiction over the one course it currently advertises.

    If there are concerns about the quality of teaching, or about academic standards, in any provider in receipt of public funds they clearly need to be addressed – and this is as true for Oxford Business College as it is for the University of Oxford. This should start with a clear plan for quality assurance (ideally one that reflects the current concerns of students) and a watertight process that can be used both to drive compliance and take action against those who don’t measure up. Ministerial legal innovation, it seems, doesn’t quite cut it.

    Source link

  • Is there a place for LEO in regulation?

    Is there a place for LEO in regulation?

    The OfS has, following a DfE study, recently announced a desire to use LEO for regulation. In my view this is a bad idea.

    Don’t get me wrong, the Longitudinal Education Outcomes (LEO) dataset is a fantastic and under-utilised tool for historical research. Nothing can compare to LEO for its rigour, coverage and the richness of the personal data it contains.

    However, it has serious limitations: it captures earnings rather than salary, so for everyone who chooses to work part time it will seriously underestimate the salary they command.

    And fundamentally it’s just too lagged. You can add other concerns around those choosing not to work and those working abroad if you wish to undermine its utility further.

    The big idea

    The OfS is proposing to use data from three years after graduation, which I assume to mean the third full tax year after graduation – although it could mean something different, as no details are provided. Assuming that my interpretation is correct, the most recent LEO data, published in June this year, relates to the 2022-23 tax year. For that to be the third full tax year after graduation, we are looking at the 2018-19 graduating cohort (and even if you go for the third tax year including the one in which they graduated, it’s the 2019-20 graduates). The OfS also proposes to continue to use four-year aggregates, which makes a lot of sense to avoid statistical noise and deal with small cohorts, but it does mean that some of the data will relate to even earlier cohorts.
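    The arithmetic is simple enough to sketch in Python. To be clear, the “third full tax year” reading is my assumption – the proposals don’t define the term – and the function name is illustrative:

    ```python
    # Sketch of the LEO time lag. Assumes "three years after graduation"
    # means the third full tax year after the graduation year - the
    # interpretation argued above, not a definition from the proposals.

    def measured_cohort(tax_year_start: int, years_after: int = 3,
                        include_graduation_year: bool = False) -> str:
        """Return the graduating academic year measured by a LEO tax year.

        tax_year_start is the start year of the published tax year
        (2022 for the 2022-23 tax year). By default the tax year is the
        Nth full tax year after graduation; include_graduation_year=True
        instead counts the tax year the cohort graduated in as year one.
        """
        offset = years_after - (1 if include_graduation_year else 0)
        grad_end = tax_year_start - offset
        return f"{grad_end - 1}-{str(grad_end)[-2:]}"

    # The latest LEO data (published June this year) covers 2022-23:
    print(measured_cohort(2022))                                # 2018-19
    print(measured_cohort(2022, include_graduation_year=True))  # 2019-20
    # Either way, those graduates entered their courses in 2016-17 or earlier.
    ```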

    The problem, therefore, is that if the proposed regime had been in place this year, the OfS would only just have got its first look at outcomes from the 2018-19 graduating cohort – who were, of course, entrants in 2016-17 or earlier. Looked at through this lens, it is hard to see how one applies any serious regulatory tools to a provider failing on this metric but performing well on others – especially if they are performing well on those based on the still lagged but more timely Graduate Outcomes survey.

    It is hard to conceive of any courses that will not have had at least one significant change in the nine (up to 12!) years since the measured cohort entered. It therefore won’t be hard for most providers to argue that the changes they have made since those cohorts entered will have had positive impacts on outcomes, and the regulator will have to give some weight to those arguments – especially if they are supported by changes in the existing progression indicator, or the proposed new skills utilisation indicator.

    A problem?

    And if the existing progression indicator is problematic, then why didn’t the regulator act on it when it had it four years earlier? The OfS could try to argue that it’s a different indicator capturing a different aspect of success but this, at least to this commentator’s mind, is a pretty flimsy argument and is likely to fail because earnings are a very narrow definition of success. Indeed, by having two indicators the regulator may well find itself in a situation where it can only take meaningful action if a provider is failing on both.

    OfS could begin to address the time lag by just looking at the first full tax year after graduation but this will undoubtedly be problematic as graduates take time to settle into careers (which is why GO is at 15 months) and of course the interim study issues will be far more significant for this cohort. It would also still be less timely than the Graduate Outcomes survey which itself collects the far more meaningful salary rather than earnings.

    There is of course a further issue with LEO in that it will forever be a black box for the providers being regulated using it. It will not be possible to share the sort of rich data with providers that is shared for other metrics, meaning that providers will not be able to undertake any serious analysis into the causes of any concerns the OfS may raise. For example, a provider would struggle to attribute poor outcomes to a course it discontinued, perhaps because it felt the course didn’t speak to the employment market. A cynic might even conclude that having a metric nobody can understand or challenge is quite nice for the OfS.

    The use of LEO in regulation is likely to generate a lot of work for the OfS and may trigger lots of debate but I doubt it will ever lead to serious negative consequences as the contextual factors and the fact that the cohorts being considered are ancient history will dull, if not completely blunt, the regulatory tools.

    Richard Puttock writes in a personal capacity.

    Source link

  • Testing Times & Interesting Discussions

    Testing Times & Interesting Discussions

    Last week, the Royal Bank of Canada (RBC) put out a discussion paper called Testing Times: Fending Off A Crisis in Post-Secondary Education, which in part is the outcome of a set of cross-country discussions held this summer by RBC, HESA, and the Business Higher Education Roundtable (BHER). The paper, I think, sums up the current situation pretty well: the system is not at a starvation point, but it is heading in that direction pretty quickly and that needs to be rectified. On the other hand, there are some ways that institutions could be moving more quickly to respond to changing social and economic circumstances. What’s great about this paper is that it balances those two ideas pretty effectively.

    I urge everyone to read it themselves because I think it sums up a lot of issues nicely – many of which we at HESA will be taking up at our Re: University conference in January (stay tuned! the nearly full conference line-up will be out in a couple of weeks, and it’s pretty exciting). But I want to draw everyone’s attention to section 4 of the report in particular, which deals with what I think is the sleeper issue of the year: the regulation of post-secondary institutions. One of the things we heard a lot on the road was how universities were being hamstrung – not just by governments but by professional regulatory bodies – in terms of developing innovative programming. This is a subject I’ll return to in the next week or two, but I am really glad that this issue might be starting to get some real traction.

    The timing of this release wasn’t accidental: it came just a few days before BHER had one of its annual high-level shindigs, and RBC’s CEO Dave McKay is also BHER’s Board Chair, so the two go hand-in-hand to some extent. I was at the summit on Monday – a Chatham House rules session at RBC headquarters entitled Strategic Summit on Talent, Technology and a New Economic Order – which attracted a good number of university and college presidents, as well as CEOs. The discussions took up the challenge in the RBC paper to look at where the country is going and where the post-secondary education sector can contribute to making a new and stronger Canada.

    And boy, was it interesting.

    I mean, partly it was some of the outright protectionist stuff being advocated by the corporate sector in the room. I haven’t heard stuff like that since I was a child. Basically, the sentiment in the room was that the World Trade Organization (WTO) is dead, the Americans aren’t playing by those rules anymore, so why should we? Security of supply > low-cost supply. Personally, I think that likely means this “new economic order” is going to mean much more expensive wholesale prices, but hey, if that’s what we have to adapt to, that’s what we have to adapt to.

    But, more pertinent to this blog were the ways the session dealt with the issue of what in higher education needs to change to meet the moment. And, for me, what was interesting was that once you get a group of business folks in a room and ask what higher education can do to help get the country on track, they actually don’t have much to say. They will talk a LOT about what government can do to help get the country on track. The stories they can tell about how much more ponderous and anti-innovation Canadian public procurement policies are compared to almost any other jurisdiction on earth would be entertaining if the implications were not so horrific. They will talk a LOT about how Canadian C-suites are risk-averse, almost as risk-averse as government, and how disappointing that is.

    But when it comes to higher education? They don’t actually have all that much to say. And that’s both good and bad.

    Now before I delve into this, let me say that it’s always a bit tricky to generalize what a sector believes based on a small group of CEOs who get drafted into a room like this one. I mean, to some degree these CEOs are there because they are interested in post-secondary education, so they aren’t necessarily very representative of the sector. But here’s what I learned:

    • CEOs are a bit ruffled by current underfunding of higher education. Not necessarily to the point where they would put any of their own political capital on the line, but they are sympathetic to institutions.
    • When they think about how higher education affects their business, CEOs seem to think primarily about human capital (i.e. graduates). They talk a lot less about research, which is mostly what universities want to talk about, so there is a bit of a mismatch there.
    • When they think about human capital, what they are usually thinking about is “can my business have access to skills at a price I want to pay?” Because the invitees are usually heads of successful fast-growing companies, the answer is usually no. Also, most say what they want are “skills” – something they, not unreasonably, equate with experience, which sets up another set of potential misunderstandings with universities because degrees ≠ experience (but it does mean everyone can agree on more work-integrated learning).
    • As a result – and this is important here – CEOs tend to think about post-secondary education in terms of firm growth, not in terms of economy-wide innovation.

    Now, maybe that’s all right and proper – after all, isn’t it government’s business to look after the economy-wide stuff? Well, maybe, but here’s where it gets interesting. You can drive innovation either by encouraging the manufacture and circulation of ideas (i.e. research) or by diffusing skills through the economy (i.e. education/training). But our federal government seems to think that innovation only happens via the introduction of new products/technology (i.e., the product of research), and that to the extent there is an issue with post-secondary education, it is that university-based research doesn’t translate into new products fast enough – i.e. the issue is research commercialization. The idea that technological adoption might be the product of governments and firms not having enough people to use new technologies properly (e.g. artificial intelligence)? Not on anyone’s radar screen.

    And that really is a problem. One I am not sure is easily fixed because I am not sure everyone realizes the degree to which they are talking past each other. But that said, the event was a promising one. It was good to be in a space where so many people cared about Canada, about innovation, and about post-secondary education. And the event itself – very well pulled-off by RBC and BHER – made people want to keep discussing higher education and the economy. Both business and higher education need to have events like this one, regularly, and not just nationally but locally as well. The two sides don’t know each other especially well, and yet their being more in sync is one of the things that could make the country work a lot better than it does. Let’s keep talking.

    Source link

  • Government AI regulation could censor protected speech online

    Government AI regulation could censor protected speech online

    Edan Kauer is a former FIRE intern and a sophomore at Georgetown University.


    Elliston Berry was just 14 years old when a male classmate at Aledo High in North Texas used AI to create fake nudes of her based on images he took from her social media. He then did the same to seven other girls at the school and shared the images on Snapchat. 

    Now, two years later, Berry and her classmates are the inspiration for Senator Ted Cruz’s Take It Down Act (TIDA), a recently enacted law which gives social media platforms 48 hours to remove “revenge porn” once reported. The bill considers any non-consensual intimate imagery (NCII), including AI deepfakes, to fall under this category. But despite the law’s noble intentions, its dangerously vague wording is a threat to free speech.

    This law, which covers both adults and minors, makes it illegal to publish an image of an identifiable minor that meets the definition of “intimate visual depiction” – defined as certain explicit nudity or sexual conduct – with intent to “arouse or gratify the sexual desire of any person” or “abuse, humiliate, harass, or degrade the minor.”

    That may sound like a no-brainer, but deciding what content this text actually covers – including what counts as “arousing,” “humiliating,” or “degrading” – is highly subjective. This law risks chilling protected digital expression, prompting social media platforms to censor harmless content like a family beach photo, sports team picture, or images of injuries or scars to avoid legal penalties or respond to bad-faith reports.

    Civil liberties groups such as the Electronic Frontier Foundation (EFF) have noted that the language of the law itself raises censorship concerns because it’s vague and therefore easily exploited:

    Take It Down creates a far broader internet censorship regime than the Digital Millennium Copyright Act (DMCA), which has been widely abused to censor legitimate speech. But at least the DMCA has an anti-abuse provision and protects services from copyright claims should they comply. This bill contains none of those minimal speech protections and essentially greenlights misuse of its takedown regime … Congress should focus on enforcing and improving these existing protections, rather than opting for a broad takedown regime that is bound to be abused. Private platforms can play a part as well, improving reporting and evidence collection systems. 

    Nor does the law cover the possibility of people filing bad-faith reports.

    In the 2002 case Ashcroft v. Free Speech Coalition, the Court said the language of the Child Pornography Prevention Act (CPPA) was so broad that it could have been used to censor protected speech. Congress passed the CPPA to combat the circulation of computer-generated child pornography, but as Justice Anthony Kennedy explained in the majority opinion, the language of the CPPA could be used to censor material that seems to depict child pornography without actually doing so.

    While we must acknowledge that online exploitation is a very real issue, we cannot solve the problem at the expense of other liberties.

    Also in 2002, the Supreme Court heard the case Ashcroft v. ACLU, which came about after Congress passed the Child Online Protection Act (COPA) to prevent minors from accessing adult content online. But again, due to the broad language of the bill, the Court found this law would restrict adults who are within their First Amendment rights to access mature content.

    As with the Take It Down Act, these were laws created to protect children from sexual exploitation online, yet drafted with vague and overly broad standards that threaten protected speech.

    But unfortunately, stories like the one at Aledo High are becoming more common as AI becomes more accessible. Last year, boys at Westfield High School in New Jersey used AI to circulate fake nudes of Francesca Mani, then 14 years old, and other girls in her class. Westfield High administrators were caught off guard, as they had never experienced this type of incident. Although the Westfield police were notified and the perpetrators were suspended for up to two days, parents criticized the school for its weak response.

    A year later, the school district developed a comprehensive AI policy and amended its bullying policy to cover harassment carried out through “electronic communication,” which includes “the use of electronic means to harass, intimidate, or bully including the use of artificial intelligence ‘AI’ technology.” What’s true for Westfield High is true for America — existing laws are often more than adequate to deal with emerging tech issues. By classifying AI material under electronic communication as a category of bullying, Westfield High demonstrates that the creation of new AI policies is redundant. On a national scale, the same can be said for classifying and prosecuting instances of child abuse online.

    While we must acknowledge that online exploitation is a very real issue, we cannot solve the problem at the expense of other liberties. Once we grant the government the power to silence the voices we find distasteful, we open the door to censorship. Though it is essential to address the very real harms of emerging AI technology, we must also keep our First Amendment rights intact.

    Source link

  • TEF6: the incredible machine takes over quality assurance regulation

    TEF6: the incredible machine takes over quality assurance regulation

    If you loved the Teaching Excellence Framework, were thrilled by the outcomes (B3) thresholds, lost your mind for the Equality of Opportunity Risk Register, and delighted in the sporadic risk-based OfS investigations based on years-old data, you’ll find a lot to love in the latest set of Office for Students proposals on quality assurance.

    In today’s Consultation on the future approach to quality regulation you’ll find a cyclical, cohort-based TEF that also includes a measurement (against benchmarks) of compliance with the thresholds for student outcomes inscribed in the B3 condition. Based on the outcomes of this super-TEF, and prioritised according to an assessment of risk, OfS will make interventions (including controls on recruitment and on the conditions of degree awarding powers) and targeted investigations. This is a first-stage consultation only; stage two will come in August 2026.

    It’s not quite a grand unified theory: we don’t mix in the rest of the B conditions (covering less pressing matters like academic standards, the academic experience, student support, assessment) because, in the words of OfS:

    Such an approach would be likely to involve visits to all providers, to assess whether they meet all the relevant B conditions of registration

    The students who are struggling right now with the impacts of higher student/staff ratios and a lack of capacity due to over-recruitment will greatly appreciate this reduction in administrative burden.

    Where we left things

    When we last considered TEF we were expecting an exercise every four years, drawing on provider narrative submissions (which included a chunk on a provider’s own definition and measurement of educational gain), students’ union narrative submissions, and data on outcomes and student satisfaction. Providers were awarded a “medal” for each of student outcomes and student experience – a matrix determined whether this resulted in an overall Bronze, Silver, Gold or Requires Improvement.

    The first three of these awards were deemed to be above minimum standards (with slight differences between each), while the latter was a portal to the much more punitive world of regulation under group B (student experience) conditions of registration. Most of the good bits of this approach came from the genuinely superb Pearce Review of TEF conducted under section 26 of the Higher Education and Research Act, which fixed a lot of the statistical and process nonsense that had crept in under previous iterations and then-current plans (though not every recommendation was implemented).

    TEF awards were last made in 2023, with the next iteration – involving all registered providers plus anyone else who wanted to play along – due in 2027.

    Perma-TEF

    A return to a rolling TEF rather than a quadrennial quality enhancement jamboree means a pool of TEF assessors rather than a one-off panel. There will be steps taken to ensure that an appropriate group of academic and student assessors is selected to assess each cohort – there will be special efforts made to use those with experience of smaller, specialist, and college-based providers – and a tenure of two-to-three years is planned. OfS is also considering whether its staff can be included among the storied ranks of those empowered to facilitate ratings decisions.

    Likewise, we’ll need a more established appeals system. Open only to those with Bronze or Requires Improvement ratings (Gold and Silver are passing grades), it would be a way to potentially forestall the engagement and investigations – based on an active risk to student experience or outcomes, or a risk of a future breach of a condition of registration – that follow a Bronze or Requires Improvement rating.

    Each provider would be assessed once every three years – all providers taking part in the first cycle would be assessed in either 2027-28, 2028-29, or 2029-30 (which covers only undergraduate students because there’s no postgraduate NSS yet – OfS plans to develop one before 2030). In many cases providers will only know which year at the start of the academic year in question, which will give them six months to get their submissions sorted.

    Because Bronze is now bad (rather than “good but not great” as it used to be), the first year could well include all providers with a 2023 Bronze (or Requires Improvement) rating, plus some with increased risks of non-compliance, some with Bronze in one of the TEF aspects, and some without a rating.

    After this, how often you are assessed depends on your rating – if you are Gold overall it is five years until the next try, Silver means four years, and Bronze three (if you are “Requires Improvement” you probably have other concerns beyond the date of your next assessment) – but this can be tweaked if OfS decides there is an increased risk to quality, or for any other reason.

    Snakes and ladders

    Ignore the gradations and matrices in the Pearce Review – the plan now is that your lowest TEF aspect rating (remember, you got sub-awards last time for student experience and student outcomes) will be your overall rating. So Silver for experience and Bronze for outcomes makes for an overall Bronze. As OfS has decided that you now have to pay (likely around £25,000) to enter what is a compulsory exercise, this is a cost that could lead to a larger cost in future.
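    Mechanically, the proposed rule is just a minimum taken over the two aspect ratings. A minimal sketch in Python – the rating hierarchy and the reassessment intervals are as described above, while the function names and the treatment of Requires Improvement are illustrative assumptions:

    ```python
    # Sketch of the proposed "lowest aspect rating wins" rule and the
    # reassessment intervals described above. The rating order is from
    # the consultation; names and the Requires Improvement interval
    # are illustrative assumptions.

    RATING_ORDER = ["Requires Improvement", "Bronze", "Silver", "Gold"]

    def overall_rating(experience: str, outcomes: str) -> str:
        """The overall TEF rating is the lower of the two aspect ratings."""
        return min(experience, outcomes, key=RATING_ORDER.index)

    def years_until_next_assessment(overall: str) -> int:
        """Gold: five years; Silver: four; Bronze: three. Requires
        Improvement is treated as three here (an assumption - such a
        provider has more immediate problems than its next assessment)."""
        return {"Gold": 5, "Silver": 4}.get(overall, 3)

    print(overall_rating("Silver", "Bronze"))     # Bronze
    print(years_until_next_assessment("Bronze"))  # 3
    ```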

    In previous TEFs, the only negative consequence for those outside the top ratings has been reputational – a loss of bragging rights of, arguably, negligible value. The new proposals align Bronze with the (B3) minimum required standards and put Requires Improvement below them: in the new calculus of value the minimum is not good enough and there will be consequences.

    We’ve already had some hints that a link to fee cap levels is back on the cards, but in the meantime OfS is pondering a cap on student numbers expansion to punish those who turn out Bronze or Requires Improvement. The workings of the expansion cap will be familiar to those who recall the old additional student numbers process – increases of more than five per cent (the old tolerance band, which is still a lot) would not be permitted for poorly rated providers.
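    As a sketch of how that tolerance band would bite – the five per cent figure is from the proposals, while the year-on-year comparison basis and the function name are illustrative assumptions:

    ```python
    # Sketch of the proposed expansion cap: poorly rated providers could
    # not grow recruitment by more than five per cent. The year-on-year
    # comparison and function name are illustrative assumptions.

    TOLERANCE = 0.05  # the old additional student numbers tolerance band

    def expansion_permitted(current_numbers: int, planned_numbers: int,
                            overall_rating: str) -> bool:
        """Gold and Silver providers are unconstrained; Bronze and
        Requires Improvement providers may not exceed 5% growth."""
        if overall_rating in ("Gold", "Silver"):
            return True
        return planned_numbers <= current_numbers * (1 + TOLERANCE)

    print(expansion_permitted(10_000, 10_400, "Bronze"))  # True  (4% growth)
    print(expansion_permitted(10_000, 11_000, "Bronze"))  # False (10% growth)
    ```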

    For providers without degree awarding powers, it is unlikely they will be successful in applying for them with Bronze or below – but OfS is also thinking about restricting aspects of existing providers’ DAPs, for example limiting their ability to subcontract or franchise provision in future. This is another de facto numbers cap in many cases, and it all comes ahead of a future consultation on DAPs that could make for an even closer link with TEF.

    Proposals for progression

    Proposal 6 will simplify the existing B3 thresholds, and integrate the way they are assessed into the TEF process. In a nutshell, the progression requirement for B3 would disappear, with the assessment made purely on continuation and completion; providers would be able to submit contextual and historic information, as part of the TEF process, to explain why performance is not above the benchmark or threshold.

    Progression will still be considered at the higher levels of TEF, and here contextual information can play more of a part – with what I propose we start calling the Norland Clause allowing providers to submit details of courses that lead to jobs that ONS does not consider professional or managerial. That existing indicator will be joined by others based on graduate reflections (from Graduate Outcomes) on how they are using what they have learned, and on benchmarked salaries three years after graduation from DfE’s Longitudinal Education Outcomes (LEO) data – in deference to that random Kemi Badenoch IFS commission at the tail end of the last parliament.

    Again, there will be contextual benchmarks for these measures (and hopefully some hefty caveating on the use of LEO median salaries) – and, as is the pattern in this consultation, there are detailed proposals to follow.

    Marginal gains, marginal losses

    The “educational gains” experiment, pioneered in the last TEF, is over: this makes three times that a regulator in England has tried and failed to include a measure of learning gain in some form of regulation. OfS is still happy for you to mention your educational gain work in your next narrative submission, but it isn’t compulsory. The reason: reducing burden, and a focus on comparability rather than a diversity of bespoke measures.

    Asking providers what something means in their context, rather than applying a one-size-fits-all measure of student success, was an immensely powerful component of the last exercise. Providers who started on that journey at considerable expense in data gathering and analysis may be less than pleased at this latest development – and we’d certainly understood that DfE were fans of the approach too.

    Similarly, the requirement for students to feed back on outcomes in their submissions to TEF has been removed. The ostensible reason is that students found it difficult last time round – the result is that insight from the valuable networks between existing students and their recently graduated peers is lost. The outcomes end of TEF is now very much data driven, with only the chance to explain unusual results offered. It’s a retreat from some of the contextual sense that crept in with the Pearce Review.

    Business as usual

    Even though TEF now feels like it is everywhere and for always, there’s still a place for OfS’ regular risk-based monitoring – and annex I (yes, there are that many annexes) contains a useful draft monitoring tool.

    Here it is very good to see staff:student ratios, falling entry requirements, a large growth in foundation year provision, and a rapid growth in numbers among what are noted as indicators of risk to the student experience. It is possible to imagine an excellent system, designed outside of the seemingly inviolate framework of the TEF, where events like these would trigger an investigation of provider governance and quality assurance processes.

    Alas, the main use of this monitoring is to decide whether or not to bring a TEF assessment forward – something that punts an immediate risk to students into something that will be dealt with retrospectively. If I’m a student on a first year that has ballooned from 300 to 900 from one cycle to the next, there is a lot of good a regulator can do by acting quickly – I am unlikely to care whether a Bronze or Silver award is made in a couple of years’ time.

    International principles

    One of the key recommendations of the Behan review on quality was a drawing together of the various disparate (and, yes, burdensome) streams of quality and standards assurance and enhancement into a unified whole. We obviously don’t quite get there – but there has been progress towards addressing another key sector bugbear that came up both in Behan and the Lords’ Industry and Regulators Committee review: adherence to international quality assurance standards (to facilitate international partnerships and, increasingly, recruitment).

    OfS will “work towards applying to join the European Quality Assurance Register for Higher Education” at the appropriate time – clearly feeling that the long overdue centring of the student voice in quality assurance (there will be an expanded role for and range of student assessors) and the incorporation of a cyclical element (to desk assessments at least) is enough to get them over the bar.

    It isn’t. Standard 2.1 of the ESG – the standards that underpin EQAR registration – requires that “external quality assurance should address the effectiveness of the internal quality assurance processes”: philosophically establishing the key role of providers themselves in monitoring and upholding the quality of their own provision, with the external assurance process primarily assessing whether (and how well) this has been done. For whatever reason, OfS believes the state (in the form of the regulator) needs to be (and is capable of being!) responsible for all quality assurance, everywhere, all the time. It’s a glaring weakness of the OfS system that urgently needs to be addressed. And it hasn’t been, this time.

    The upshot is that while the new system looks ESG-ish, it is unlikely to be judged to be in full compliance.

    Single word judgements

    The recent use of single headline judgements of educational quality in ways that have far-reaching regulatory implications is hugely problematic. The government announced the abandonment of the old “requires improvement, inadequate, good, and outstanding” judgements for schools in favour of a more nuanced “report card” approach – driven in part by the death by suicide of headteacher Ruth Perry in 2023. The “inadequate” rating given to her Caversham Primary School would have meant forced academisation and deeper regulatory oversight.

    Regulation and quality assurance in education needs to be rigorous and reliable – it also needs to be context-aware and focused on improvement rather than retribution. Giving single headline grades cute, Olympics-inspired names doesn’t really cut it – and as we approach the fifth redesign of an exercise that has only run six times since 2016, you would perhaps think that rather harder questions need to be asked about the value (and cost!) of this undertaking.

    If we want to assess and control the risks of modular provision, transnational education, rapid expansion, and a growing number of innovations in delivery we need providers as active partners in the process. If we want to let universities try new things we need to start from a position that we can trust universities to have a focus on the quality of the student experience that is robust and transparent. We are reaching the limits of the current approach. Bad actors will continue to get away with poor quality provision – students won’t see timely regulatory action to prevent this – and eventually someone is going to get hurt.

    Source link