Tag: assurance

  • Quality assurance behind the dashboard


    The depressing thing about the contemporary debate on the quality of higher education in England is how limited it is.

From the outside, everything is about structures, systems, and enforcement: the regulator will root out “poor quality courses” (using data of some sort), students have access to an ombuds-style service in the Office of the Independent Adjudicator, the B3 and TEF arrangements mean that regulatory action will be taken. And so on.

    The proposal on the table from the Office for Students at the moment doubles down on a bunch of lagging metrics (continuation, completion, progression) and one limited lagging measure of student satisfaction (NSS) underpinning a metastasised TEF that will direct plaudits or deploy increasingly painful interventions based on a single precious-metal scale.

All of these sound impressive, and may give your academic registrar sleepless nights – but none of them offer meaningful and timely redress to the student who has turned up for a 9am lecture to find that nobody has arrived to deliver it – again. Which is surely the point.

It is occasionally useful to remember how little these kinds of visible, sector-level quality assurance systems have to do with actual quality assurance as experienced by students and others, so let’s look at how things currently work and break it down by need state.

    I’m a student and I’m having a bad time right now

Continuation data and progression data published in 2025 reflect the experience of students who graduated between 2019 and 2022; completion data refers to cohorts between 2016 and 2019; the NSS reflects the opinions of final year students and is published the summer after they graduate. None of these contain any information about what is happening in labs, lecture theatres, and seminar rooms right now.

    As students who have a bad experience in higher education don’t generally get the chance to try it again, any useful system of quality assurance needs to be able to help students in the moment – and the only realistic way that this can happen is via processes within a provider.

From the perspective of the student, the most common of these are module feedback (the surveys conducted at the end of each unit of teaching) and the work of the student representative (a peer with the ability to feed back on behalf of students). Beyond this, students have the ability to make internal complaints, ranging from a quiet word with the lecturer after the seminar to a formal process with support from the Students’ Union.

    While little national attention has been paid in recent years to these systems and pathways they represent pretty much the only chance that an issue students are currently facing can be addressed before it becomes permanent.

    The question needs to be whether students are aware of these routes and feel confident in using them – it’s fair to say that experience is mixed across the sector. Some providers are very responsive to the student voice, others may not be as quick or as effective as they should be. Our only measure of these things is via the National Student Survey – about 80 per cent of the students in the 2025 cohort agree that students’ opinions about their course are valued by staff, while a little over two-thirds agree that it is clear that student feedback is acted upon.

Both of these are up on the equivalent questions from about five years ago, suggesting a slow improvement in such work – but there is scope for such systems to be reviewed and promoted nationally. Everything else is just a way for students to possibly seek redress long after anything could be done about it.

    I’m a graduate and I don’t know what my degree is worth/ I’m an employer and I need graduate skills

    The value of a degree is multifaceted – and links as much to the reputation of a provider or course as to the hard work of a student.

On the former, much of the heavy lifting is done by the way the design of a course conforms to recognised standards. For more vocational courses, these are likely to have been set by professional, statutory, and regulatory bodies (PSRBs) – independent bodies that set requirements (with varying degrees of specificity) around what should be taught on a course and what a graduate should be capable of doing or understanding.

    Where no PSRB exists, course designers are likely to map to the QAA Subject Benchmarks, or to draw on external perspectives from academics in other universities. As links between universities and local employment needs solidify, the requirements set by local skills improvement plans (LSIPs) will play a growing part – and it is very likely that these will be mapped to the UK Standard Skills Classification descriptors.

The academic standing of a provider is nominally administered by the regulator – in England the Office for Students has power to deregister a provider where there are concerns, making it ineligible for state funding and sparking a media firestorm that will likely torch any residual esteem. Events like this are rare – standards are generally maintained via a semi-formal system of cross-provider benchmarking and external examination, leavened by the occasional action of whistleblowers.

That’s also a pretty good description of how we assure that the mark a graduate is awarded makes sense when compared to the marks awarded to other graduates. External examiners here play a role in ensuring that standards are consistent within a subject, albeit usually at module rather than course level; it’s another system that has been allowed (and indeed actively encouraged) to atrophy, but it remains the only way of doing this stuff in anything approaching real time.

    I’m an international partner and I can’t be sure that these qualifications align with what we do

Collaborating internationally, or even studying internationally, often requires some very specific statements around the quality of provision. One popular route to doing this is being able to assert that your provider meets well-understood international standards – the ESG (Standards and Guidelines for Quality Assurance in the European Higher Education Area) represent probably the most common example.

    Importantly, the ESG does not set standards about teaching and learning, or awarding qualifications – it sets standards for the way institutional quality assurance processes are assessed by national bodies. If you think that this is incredibly arm’s length you would be right, but it is also the only way of ensuring that the bits of quality assurance that interface with the student experience in near-real-time actually work.

I am an academic and I want to design courses and teach in ways that help students to succeed

    Quality enhancement – beyond compliance with academic standards – is about supporting academic staff in making changes to teaching and learning practice (how lectures are delivered, how assessments are designed, how individual support is offered). It is often seen as an add-on, but should really be seen as a core component of any system of quality assurance. Indeed, in Scotland, regulatory quality assurance in the form of the Tertiary Quality Enhancement Framework starts from the premise that tertiary provision needs to be “high quality” and “improving”.

Outside of Scotland, the vestiges of a previous UK-wide approach to quality enhancement exist in the form of AdvanceHE. Many academic staff will first encounter the principles and practice of teaching quality enhancement via developing a portfolio to submit for fellowship – increasingly a prerequisite for academic promotions. AdvanceHE also supports standards which are designed to underpin training in teaching for new academic staff, and support networks. The era of institutional “learning and teaching offices” (another vestige of a previous government-sponsored measure to support enhancement) is mostly over, but many providers have networks of staff with an interest in the practice of teaching in higher education.

    So what does the OfS actually do?

    In England, the Office for Students operates a deficit model of quality assurance. It assumes that, unless there is some evidence to the contrary, an institution is delivering higher education at an appropriate level of quality. Where the evidence exists for poor performance, the regulator will intervene directly. This is the basis of a “risk based” approach to quality assurance, where more effort can be expended in areas of concern and less burden placed on providers.

For a system like this to work in a way that addresses any of the needs detailed above, OfS would need far more, and more detailed, information on where things are going wrong as soon as they happen. It would need to be bold in acting quickly, often based on incomplete or emerging evidence. Thus far, OfS has been notably averse to legal risk (having had its fingers burned by the Bloomsbury case), and has failed (despite a sustained attempt in the much-maligned Data Futures) to meaningfully modernise the process of data collection and analysis.

It would be simpler and cheaper for OfS to support and develop institutions’ own mechanisms to support quality and academic standards – an approach that would allow for student issues to be dealt with quickly and effectively at that level. A stumbling block here would be the diversity of the sector, with the unique forms and small scale of some providers making it difficult to design any form of standardisation into these systems. The regulator itself, or another body such as the Office of the Independent Adjudicator (as happens now), would act as a backstop for instances where these processes do not produce satisfactory results.

    The budget of the Office for Students has grown far beyond the ability of the sector to support it (as was originally intended) via subscription. It receives more than £10m a year from the Department for Education to cover its current level of activity – it feels unlikely that more funds will arrive from either source to enable it to quality assure 420 providers directly.

All of this would be moot if there were no current concerns about quality and standards. And there are many – stemming both from corners being cut (and systems being run beyond capacity) due to financial pressures, and from a failure to regulate in a way that grows and assures a provider’s own capacity to manage quality and standards. We’ve seen evidence from the regulator itself that the combination of financial and regulatory failures has led to many examples of quality and standards problems: courses and modules closed without suitable alternatives for students, difficulties faced by students in accessing staff and facilities due to overcrowding or underprovision, and concerns about an upward pressure on marks from a need to bolster continuation and completion rates.

    The route through the current crisis needs to be through improvement in providers’ own processes, and that would take something that the OfS has not historically offered the sector: trust.


  • TEF6: the incredible machine takes over quality assurance regulation


If you loved the Teaching Excellence Framework, were thrilled by the outcomes (B3) thresholds, lost your mind for the Equality of Opportunity Risk Register, and delighted in the sporadic risk-based OfS investigations based on years-old data, you’ll find a lot to love in the latest set of Office for Students proposals on quality assurance.

In today’s Consultation on the future approach to quality regulation you’ll find a cyclical, cohort-based TEF that also includes a measurement (against benchmarks) of compliance with the thresholds for student outcomes inscribed in the B3 condition. Based on the outcomes of this super-TEF, and prioritised based on assessment of risk, OfS will make interventions (including controls on recruitment and the conditions of degree awarding powers) and targeted investigations. This is a first-stage consultation only; stage two will come in August 2026.

    It’s not quite a grand unified theory: we don’t mix in the rest of the B conditions (covering less pressing matters like academic standards, the academic experience, student support, assessment) because, in the words of OfS:

    Such an approach would be likely to involve visits to all providers, to assess whether they meet all the relevant B conditions of registration

    The students who are struggling right now with the impacts of higher student/staff ratios and a lack of capacity due to over-recruitment will greatly appreciate this reduction in administrative burden.

    Where we left things

When we last considered TEF we were expecting an exercise every four years, drawing on provider narrative submissions (which included a chunk on a provider’s own definition and measurement of educational gain), students’ union narrative submissions, and data on outcomes and student satisfaction. Providers were awarded a “medal” for each of student outcomes and student experience – a matrix determined whether this resulted in an overall Bronze, Silver, Gold or Requires Improvement.

    The first three of these awards were deemed to be above minimum standards (with slight differences between each), while the latter was a portal to the much more punitive world of regulation under group B (student experience) conditions of registration. Most of the good bits of this approach came from the genuinely superb Pearce Review of TEF conducted under section 26 of the Higher Education and Research Act, which fixed a lot of the statistical and process nonsense that had crept in under previous iterations and then-current plans (though not every recommendation was implemented).

TEF awards were last made in 2023, with the next iteration – involving all registered providers plus anyone else who wanted to play along – due in 2027.

    Perma-TEF

    A return to a rolling TEF rather than a quadrennial quality enhancement jamboree means a pool of TEF assessors rather than a one-off panel. There will be steps taken to ensure that an appropriate group of academic and student assessors is selected to assess each cohort – there will be special efforts made to use those with experience of smaller, specialist, and college-based providers – and a tenure of two-to-three years is planned. OfS is also considering whether its staff can be included among the storied ranks of those empowered to facilitate ratings decisions.

Likewise, we’ll need a more established appeals system. Open only to those with Bronze or Requires Improvement ratings (Gold and Silver are passing grades), it would be a way to potentially forestall the engagement and investigation that would otherwise follow from an active risk to student experience or outcomes, or a risk of a future breach of a condition of registration.

Each provider would be assessed once in the first three-year cycle – all providers taking part would be assessed in either 2027-28, 2028-29, or 2029-30 (which covers only undergraduate students because there’s no postgraduate NSS yet – OfS plan to develop one before 2030). In many cases they’ll only know which one at the start of the academic year in question, which will give them six months to get their submissions sorted.

Because Bronze is now bad (rather than “good but not great” as it used to be) the first year’s cohort could well include all providers with a 2023 Bronze (or Requires Improvement) rating, plus some with increased risks of non-compliance, some with Bronze in one of the TEF aspects, and some without a rating.

    After this, how often you are assessed depends on your rating – if you are Gold overall it is five years till the next try, Silver means four years, and Bronze three (if you are “Requires Improvement” you probably have other concerns beyond the date of your next assessment) but this can be tweaked if OfS decides there is an increased risk to quality or for any other reason.
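To make that cadence concrete, here’s a minimal sketch in Python – the names are mine rather than OfS terminology, and since the consultation gives no interval for Requires Improvement this sketch simply assumes the Bronze figure as a placeholder:

```python
# A minimal sketch of the proposed reassessment cadence, assuming the
# intervals described above (Gold: 5 years, Silver: 4, Bronze: 3).
# OfS can shorten any gap where it sees increased risk to quality
# "or for any other reason", so treat the result as a ceiling.
REASSESSMENT_YEARS = {"Gold": 5, "Silver": 4, "Bronze": 3}

def years_to_next_assessment(overall_rating: str) -> int:
    """Default gap before the next TEF assessment for a given rating.

    The consultation gives no interval for Requires Improvement (a
    provider there has bigger problems), so this sketch assumes the
    Bronze figure as a placeholder.
    """
    return REASSESSMENT_YEARS.get(overall_rating, 3)
```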

    Snakes and ladders

Ignore the gradations and matrices in the Pearce Review – the plan now is that your lowest TEF aspect rating (remember you got sub-awards last time for student experience and student outcomes) will be your overall rating. So Silver for experience and Bronze for outcomes makes for an overall Bronze. And as OfS has decided that you now have to pay (likely around £25,000) to enter what is a compulsory exercise, that fee could lead to far larger costs in future.
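Put another way, the proposed combination rule is just a minimum over an ordered scale. A minimal sketch (the ordering and function name are illustrative assumptions, not anything from the consultation):

```python
# A minimal sketch of the "lowest aspect wins" rule described above.
RATING_ORDER = ["Requires Improvement", "Bronze", "Silver", "Gold"]

def overall_rating(experience: str, outcomes: str) -> str:
    """The overall TEF rating is the lower of the two aspect ratings."""
    return min(experience, outcomes, key=RATING_ORDER.index)

# Silver for experience plus Bronze for outcomes makes an overall Bronze.
assert overall_rating("Silver", "Bronze") == "Bronze"
assert overall_rating("Gold", "Silver") == "Silver"
```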

In previous TEFs, the only negative consequence for those outside of the top ratings has been reputational – a loss of bragging rights of, arguably, negligible value. The new proposals align Bronze with the (B3) minimum required standards and put Requires Improvement below these: in the new calculus of value the minimum is not good enough and there will be consequences.

    We’ve already had some hints that a link to fee cap levels is back on the cards, but in the meantime OfS is pondering a cap on student numbers expansion to punish those who turn out Bronze or Requires Improvement. The workings of the expansion cap will be familiar to those who recall the old additional student numbers process – increases of more than five per cent (the old tolerance band, which is still a lot) would not be permitted for poorly rated providers.
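For illustration only – the consultation has not set out the mechanics, so everything below (the names, the assumption that the cap bites only at Bronze and below, the worked numbers) is a guess at how a five per cent tolerance band might operate:

```python
# A hypothetical sketch of the mooted expansion cap, assuming the five
# per cent tolerance band applies only to poorly rated providers.
TOLERANCE = 0.05  # the old additional student numbers tolerance band

def growth_permitted(current_numbers: int, planned_numbers: int,
                     overall_rating: str) -> bool:
    """Would the planned intake growth be allowed under the proposal?"""
    if overall_rating in ("Gold", "Silver"):
        return True  # no cap proposed above Bronze
    return planned_numbers <= current_numbers * (1 + TOLERANCE)

# A Bronze provider with 10,000 students could grow to 10,500 - not 10,600.
assert growth_permitted(10_000, 10_500, "Bronze")
assert not growth_permitted(10_000, 10_600, "Bronze")
```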

For providers without degree awarding powers it is unlikely they will be successful in applying for them with Bronze and below – but OfS is also thinking about restricting aspects of existing providers’ DAPs, for example limiting their ability to subcontract or franchise provision in future. This is another de facto numbers cap in many cases, and it all comes ahead of a future consultation on DAPs that could make for an even closer link with TEF.

    Proposals for progression

Proposal 6 will simplify the existing B3 thresholds, and integrate the way they are assessed into the TEF process. In a nutshell, the progression requirement for B3 would disappear – assessment would be made purely on continuation and completion, with providers able to submit contextual and historic information as part of the TEF process to explain why performance is not above the benchmark or threshold.
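Mechanically, that leaves a very simple baseline test. A minimal sketch follows, with illustrative function names and made-up numbers – the real thresholds and benchmarks are provider- and mode-specific:

```python
# A minimal sketch of the simplified B3-style baseline test under
# Proposal 6: only continuation and completion are checked against a
# threshold (or provider-level benchmark); progression drops out.
def b3_indicators_below(indicators: dict[str, float],
                        thresholds: dict[str, float]) -> list[str]:
    """Return the baseline indicators that fall short.

    Anything flagged here invites contextual and historic explanation
    within TEF rather than an automatic finding.
    """
    return [name for name in ("continuation", "completion")
            if indicators[name] < thresholds[name]]

# Illustrative numbers only.
assert b3_indicators_below(
    {"continuation": 0.82, "completion": 0.71},
    {"continuation": 0.80, "completion": 0.75},
) == ["completion"]
```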

Progression will still be considered at the higher levels of TEF, and here contextual information can play more of a part – with what I propose we start calling the Norland Clause allowing providers to submit details of courses that lead to jobs that ONS does not consider as professional or managerial. That existing indicator will be joined by another based on graduate reflections (in Graduate Outcomes) on how they are using what they have learned, and benchmarked salaries three years after graduation from DfE’s Longitudinal Education Outcomes (LEO) data – in deference to that random Kemi Badenoch IFS commission at the tail end of the last parliament.

    Again, there will be contextual benchmarks for these measures (and hopefully some hefty caveating on the use of LEO median salaries) – and, as is the pattern in this consultation, there are detailed proposals to follow.

    Marginal gains, marginal losses

The “educational gains” experiment, pioneered in the last TEF, is over: making this the third time that a regulator in England has tried and failed to include a measure of learning gain in some form of regulation. OfS is still happy for you to mention your educational gain work in your next narrative submission, but it isn’t compulsory. The reason: reducing burden, and a focus on comparability rather than a diversity of bespoke measures.

Asking providers what something means in their context, rather than applying a one-size-fits-all measure of student success, was an immensely powerful component of the last exercise. Providers who started on that journey at considerable expense in data gathering and analysis may be less than pleased at this latest development – and we’d certainly understood that DfE were fans of the approach too.

Similarly, the requirement for students to feed back on outcomes in their submissions to TEF has been removed. The ostensible reason is that students found it difficult last time round – the result is that insight from the valuable networks between existing students and their recently graduated peers is lost. The outcomes end of TEF is now very much data driven, with only the chance to explain unusual results offered. It’s a retreat from some of the contextual sense that crept in with the Pearce Review.

    Business as usual

Even though TEF now feels like it is everywhere and for always, there’s still a place for OfS’ regular risk-based monitoring – and annex I (yes, there are that many annexes) contains a useful draft monitoring tool.

Here it is very good to see staff:student ratios, falling entry requirements, a large growth in foundation year provision, and a rapid growth in numbers among what are noted as indicators of risk to the student experience. It is possible to imagine an excellent system, designed outside of the seemingly inviolate framework of the TEF, where events like these would trigger an investigation of provider governance and quality assurance processes.

Alas, the main use of this monitoring is to decide whether or not to bring a TEF assessment forward – punting an immediate risk to students into something that will be dealt with retrospectively. If I’m a student on a first year that has ballooned from 300 to 900 from one cycle to the next there is a lot of good a regulator can do by acting quickly – I am unlikely to care whether a Bronze or Silver award is made in a couple of years’ time.

    International principles

    One of the key recommendations of the Behan review on quality was a drawing together of the various disparate (and, yes, burdensome) streams of quality and standards assurance and enhancement into a unified whole. We obviously don’t quite get there – but there has been progress made towards another key sector bugbear that came up both in Behan and the Lords’ Industry and Regulators Committee review: adherence to international quality assurance standards (to facilitate international partnerships and, increasingly, recruitment).

    OfS will “work towards applying to join the European Quality Assurance Register for Higher Education” at the appropriate time – clearly feeling that the long overdue centring of the student voice in quality assurance (there will be an expanded role for and range of student assessors) and the incorporation of a cyclical element (to desk assessments at least) is enough to get them over the bar.

It isn’t. Principle 2.1 of the EQAR ESG requires that “external quality assurance should address the effectiveness of the internal quality assurance processes” – philosophically establishing the key role of providers themselves in monitoring and upholding the quality of their own provision, with the external assurance process primarily assessing whether (and how well) this has been done. For whatever reason OfS believes the state (in the form of the regulator) needs to be (and is capable of being!) responsible for all quality assurance, everywhere, all the time. It’s a glaring weakness of the OfS system that urgently needs to be addressed. And it hasn’t been, this time.

    The upshot is that while the new system looks ESG-ish, it is unlikely to be judged to be in full compliance.

    Single word judgements

The recent use of single headline judgements of educational quality in ways that have far-reaching regulatory implications is hugely problematic. The government announced the abandonment of the old “requires improvement, inadequate, good, and outstanding” judgements for schools in favour of a more nuanced “report card” approach – driven in part by the death by suicide of headteacher Ruth Perry in 2023. The “inadequate” rating given to her Caversham Primary School would have meant forced academisation and deeper regulatory oversight.

    Regulation and quality assurance in education needs to be rigorous and reliable – it also needs to be context-aware and focused on improvement rather than retribution. Giving single headline grades cute, Olympics-inspired names doesn’t really cut it – and as we approach the fifth redesign of an exercise that has only run six times since 2016 you would perhaps think that rather harder questions need to be asked about the value (and cost!) of this undertaking.

    If we want to assess and control the risks of modular provision, transnational education, rapid expansion, and a growing number of innovations in delivery we need providers as active partners in the process. If we want to let universities try new things we need to start from a position that we can trust universities to have a focus on the quality of the student experience that is robust and transparent. We are reaching the limits of the current approach. Bad actors will continue to get away with poor quality provision – students won’t see timely regulatory action to prevent this – and eventually someone is going to get hurt.


  • Quality assurance needs consideration, not change for change’s sake


    It’s been a year since publication of the Behan review and six months since OfS promised to “transform” their approach to quality assessment in response. But it’s still far from clear what this looks like, or if the change is what the sector really needs.

    In proposals for a new strategy published back in December OfS suggested a refocus of regulatory activity to concentrate on three strategic priorities of quality, the wider student experience and financial resilience. But while much of the mooted activity within experience and resilience themes felt familiar, when it came to quality, more radical change was clearly on the agenda.

    The plans are heavily influenced by findings of last summer’s independent review (the Behan review). This critiqued what it saw as minimal interaction between assessment relating to baseline compliance and excellence, and recommended bringing these strands together to focus on general improvement of quality throughout the sector. In response OfS pledged to ‘transform’ quality assessment, retaining TEF at the core of an integrated approach and developing more routine and widespread activity.

    Current concerns

Unfortunately, these bare bones proposals raised more questions about the new integrated approach than they answered – and if OfS’ recent blog update was a welcome attempt to do more in the way of delivering timely and transparent information to providers, it disappointed on detail. OfS have been discussing key issues such as the extent of integration, scope for a new TEF framework, and methods of assessment. But while a full set of proposals will be out for consultation in the autumn, in the meantime there’s little to learn other than to expect a very different TEF which will probably operate on a rolling cycle (assessing all institutions over a four to five year period).

The inability to cement preparations for the next TEF will cause some frustration for providers. However, if, as the tone of communications suggests, OfS is aiming for more disruptive integration rather than a simple expansion of TEF, the proposals may present some bigger concerns for the sector.

A fundamental concern is whether an integrated approach aimed at driving overall improvement is the most effective way to tackle the sector’s current challenges around quality. Behan’s review warns against an overemphasis on baseline regulation, but below-standard provision from a significant minority of providers is where the most acute risks to students, taxpayers and sector reputation lie (as opposed to failure to improve quality for the majority performing above the baseline). Regulation should support improvement across the board too, of course.

    However, it’s not clear how shifting focus away from the former, let alone moving it within a framework designed to assess excellence periodically, will usefully help OfS tackle stubborn pockets of poor provision and emerging threats within a dynamic sector.

There is also an obvious tension inherent in any attempt to bring baseline regulation within a rolling cycle, which is manifest as soon as OfS find serious concerns about provider quality mid-cycle. Here we should expect OfS to intervene with investigation and enforcement where appropriate to protect the student and wider stakeholder interest. But doing so would essentially involve regulating on minimum standards on top of a system that’s aiming to do that already as part of an integrated approach. Moreover, if the whistleblowing and lead indicators which OfS seem keen to develop to alert them to issues operate effectively, and if OfS start looking seriously at franchise and potentially TNE provision, it’s easy to imagine this duplication becoming widespread.

There is also the issue of burden for both regulator and providers, which should be recognised within any significant shift in approach. For OfS there’s a question of the extent to which developing and delivering an integrated approach is hindering ongoing quality assessment. Meanwhile, getting to grips with new regulatory processes, and aligning internal approaches to quality assurance and reporting, will inevitably absorb significant provider resource. At a time when pressures are profound, this is likely to be particularly unwelcome and could detract significantly from the focus on delivery and students. Ironically, it’s hard to see how transformative change would avoid hampering the across-the-board improvements in quality that Behan advocates – and it could prove somewhat counter-productive to the pursuit of OfS’ other strategic goals.

    The challenge

    It’s crucial that OfS take time to consider how best to progress with any revised approach and sector consultation throughout the process is welcome. Nevertheless, development appears to be progressing slowly and somewhat at odds with OfS’ positioning as an agile and confident regulator operating in a dynamic landscape. Maybe this should tell us something about the difficulties inherent in developing an integrated approach.

    There’s much to admire about the Behan review and OfS’ responsiveness to the recommendations is laudable. But while Behan looks to the longer term, I’m not convinced that in the current climate there’s much wrong with the idea of maintaining the incumbent framework.

    Let’s not forget that this was established by OfS only three years ago following significant development and consultation to ensure a judicious approach.

I wonder if the real problem here is that, in contrast to a generally well received TEF (and as Behan highlights), OfS’ work on baseline quality regulation simply hasn’t progressed with the speed, clarity and bite that were anticipated – and that were necessary to drive positive change above the minimum. And I wonder if a better solution to pressing quality concerns would be for OfS to concentrate resources on improving operation of the current framework. There certainly feels to be room to deliver more – and more responsive, more transparent and more impactful – baseline investigations without radical change. At the same time, the feat of maintaining a successful and much expanded TEF seems much more achievable without bringing a significant amount of assurance activity within its scope.

    We may yet see a less intrusive approach to integration proposed by OfS. I think this could be a better way forward – less burdensome and more suited to the sector’s current challenges. As the regulator reflects on their approach over the summer with a new chair at the helm who’s closer to the provider perspective and more distanced from the independent review, perhaps this is one which they will lean towards.
