The depressing thing about the contemporary debate on the quality of higher education in England is how limited it is.
From the outside, everything is about structures, systems, and enforcement: the regulator will root out “poor quality courses” (using data of some sort), students have access to an ombuds-style service in the Office for the Independent Adjudicator, the B3 and TEF arrangements mean that regulatory action will be taken. And so on.
The proposal on the table from the Office for Students at the moment doubles down on a bunch of lagging metrics (continuation, completion, progression) and one limited lagging measure of student satisfaction (NSS) underpinning a metastasised TEF that will direct plaudits or deploy increasingly painful interventions based on a single precious-metal scale.
All of these sound impressive, and may give your academic registrar sleepless nights – but none of them offer meaningful and timely redress to the student who has turned up for a 9am lecture to find that nobody is there to deliver it – again. Which is surely the point.
It is occasionally useful to remember how little these visible, sector-level quality assurance systems have to do with actual quality assurance as experienced by students and others, so let’s look at how things currently work and break it down by need state.
I’m a student and I’m having a bad time right now
Continuation data and progression data published in 2025 reflect the experience of students who graduated between 2019 and 2022; completion data refers to cohorts between 2016 and 2019; the NSS reflects the opinions of final-year students and is published the summer after they graduate. None of these contain any information about what is happening in labs, lecture theatres, and seminar rooms right now.
As students who have a bad experience in higher education don’t generally get the chance to try it again, any useful system of quality assurance needs to be able to help students in the moment – and the only realistic way that this can happen is via processes within a provider.
From the perspective of the student, the most common of these are module feedback (the surveys conducted at the end of each unit of teaching) and the work of the student representative (a peer with the ability to feed back on behalf of students). Beyond this, students have the ability to make internal complaints, ranging from a quiet word with the lecturer after the seminar to a formal process with support from the Students’ Union.
While little national attention has been paid in recent years to these systems and pathways, they represent pretty much the only chance that an issue students are currently facing can be addressed before it becomes permanent.
The question needs to be whether students are aware of these routes and feel confident in using them – it’s fair to say that experience is mixed across the sector. Some providers are very responsive to the student voice, others may not be as quick or as effective as they should be. Our only measure of these things is via the National Student Survey – about 80 per cent of the students in the 2025 cohort agree that students’ opinions about their course are valued by staff, while a little over two-thirds agree that it is clear that student feedback is acted upon.
Both of these figures are up on the equivalent questions from about five years ago, suggesting a slow improvement in this work – but there is scope for such systems to be reviewed and promoted nationally. Everything else is just a way for students to possibly seek redress long after anything could be done about it.
I’m a graduate and I don’t know what my degree is worth / I’m an employer and I need graduate skills
The value of a degree is multifaceted – and links as much to the reputation of a provider or course as to the hard work of a student.
On the former, much of the heavy lifting is done by the way the design of a course conforms to recognised standards. For more vocational courses, these are likely to have been set by professional, statutory, and regulatory bodies (PSRBs) – independent bodies which set requirements (with varying degrees of specificity) around what should be taught on a course and what a graduate should be capable of doing or understanding.
Where no PSRB exists, course designers are likely to map to the QAA Subject Benchmarks, or to draw on external perspectives from academics in other universities. As links between universities and local employment needs solidify, the requirements set by local skills improvement plans (LSIPs) will play a growing part – and it is very likely that these will be mapped to the UK Standard Skills Classification descriptors.
The academic standing of a provider is nominally administered by the regulator – in England the Office for Students has the power to deregister a provider where there are concerns, making it ineligible for state funding and sparking a media firestorm that will likely torch any residual esteem. Events like this are rare – standards are generally maintained via a semi-formal system of cross-provider benchmarking and external examination, leavened by the occasional action of whistleblowers.
That’s also a pretty good description of how we assure that the mark a graduate is awarded makes sense when compared to the marks awarded to other graduates. External examiners play a role here in ensuring that standards are consistent within a subject, albeit usually at module rather than course level; it’s another system that has been allowed (and indeed actively encouraged) to atrophy, but it remains the only way of doing this stuff in anything approaching real time.
I’m an international partner and I can’t be sure that these qualifications align with what we do
Collaborating internationally, or even studying internationally, often requires some very specific statements around the quality of provision. One popular route to doing this is being able to assert that your provider meets well-understood international standards – the ESG (the Standards and Guidelines for Quality Assurance in the European Higher Education Area) represent probably the most common example.
Importantly, the ESG does not set standards about teaching and learning, or awarding qualifications – it sets standards for the way institutional quality assurance processes are assessed by national bodies. If you think that this is incredibly arm’s length you would be right, but it is also the only way of ensuring that the bits of quality assurance that interface with the student experience in near-real-time actually work.
I’m an academic and I want to design courses and teach in ways that help students to succeed
Quality enhancement – beyond compliance with academic standards – is about supporting academic staff in making changes to teaching and learning practice (how lectures are delivered, how assessments are designed, how individual support is offered). It is often seen as an add-on, but should really be seen as a core component of any system of quality assurance. Indeed, in Scotland, regulatory quality assurance in the form of the Tertiary Quality Enhancement Framework starts from the premise that tertiary provision needs to be “high quality” and “improving”.
Outside of Scotland, the vestiges of a previous UK-wide approach to quality enhancement exist in the form of AdvanceHE. Many academic staff will first encounter the principles and practice of teaching quality enhancement via developing a portfolio to submit for fellowship – increasingly a prerequisite for academic promotion. AdvanceHE also supports standards designed to underpin training in teaching for new academic staff, alongside support networks. The era of institutional “learning and teaching offices” (another vestige of a previous government-sponsored measure to support enhancement) is mostly over, but many providers have networks of staff with an interest in the practice of teaching in higher education.
So what does the OfS actually do?
In England, the Office for Students operates a deficit model of quality assurance. It assumes that, unless there is some evidence to the contrary, an institution is delivering higher education at an appropriate level of quality. Where the evidence exists for poor performance, the regulator will intervene directly. This is the basis of a “risk based” approach to quality assurance, where more effort can be expended in areas of concern and less burden placed on providers.
For a system like this to work in a way that addresses any of the needs detailed above, OfS would need far more, and more detailed, information on where things are going wrong as soon as they happen. It would need to be bold in acting quickly, often on the basis of incomplete or emerging evidence. Thus far, OfS has been notably averse to legal risk (having had its fingers burned by the Bloomsbury case), and has failed (despite a sustained attempt in the much-maligned Data Futures programme) to meaningfully modernise the process of data collection and analysis.
It would be simpler and cheaper for OfS to support and develop institutions’ own mechanisms to support quality and academic standards – an approach that would allow for student issues to be dealt with quickly and effectively at that level. A stumbling block here would be the diversity of the sector, with the unique forms and small scale of some providers making it difficult to design any form of standardisation into these systems. The regulator itself, or another body such as the Office for the Independent Adjudicator (as happens now), would act as a backstop for instances where these processes do not produce satisfactory results.
The budget of the Office for Students has grown far beyond the ability of the sector to support it (as was originally intended) via subscription. It receives more than £10m a year from the Department for Education to cover its current level of activity – it feels unlikely that more funds will arrive from either source to enable it to quality assure 420 providers directly.
All of this would be moot if there were no current concerns about quality and standards. And there are many – stemming both from corners being cut (and systems being run beyond capacity) due to financial pressures, and from a failure to regulate in a way that grows and assures a provider’s own capacity to manage quality and standards. We’ve seen evidence from the regulator itself that the combination of financial and regulatory failures has led to many examples of quality and standards problems: courses and modules closed without suitable alternatives for students, difficulties faced by students in accessing staff and facilities due to overcrowding or underprovision, and concerns about an upward pressure on marks from a need to bolster continuation and completion rates.
The route through the current crisis needs to be through improvement in providers’ own processes, and that would take something that the OfS has not historically offered the sector: trust.

