Category: TEF

  • From improvement to compliance – a significant shift in the purpose of the TEF

    The Teaching Excellence Framework has always had multiple aims.

    It was partly intended to rebalance institutional focus from research towards teaching and student experience. Jo Johnson, the minister who implemented it, saw it as a means of increasing undergraduate teaching resources in line with inflation.

    Dame Shirley Pearce prioritised enhancing quality in her excellent review of TEF implementation. And there have been other purposes of the TEF: a device to support regulatory interventions where quality fell below required thresholds, and as a resource for student choice.

    And none of this should ignore its enthusiastic adoption by student recruitment teams as a marketing tool.

    As former Chair and Deputy Chair of the TEF, we are perhaps more aware than most of these competing purposes, and more experienced in understanding how regulators, institutions and assessors have navigated the complexity of TEF implementation. The TEF has had its critics – something else we are keenly aware of – but it has had a marked impact.

    Its benchmarked indicator sets have driven a data-informed and strategic approach to institutional improvement. Its concern with disparities for underrepresented groups has raised the profile of equity in institutional education strategies. Its whole institution sweep has made institutions alert to the consequences of poorly targeted education strategies and prioritised improvement goals. Now, the publication of the OfS’s consultation paper on the future of the TEF is an opportunity to reflect on how the TEF is changing and what it means for the regulatory and quality framework in England.

    A shift in purpose

    The consultation proposes that the TEF becomes part of what the OfS sees as a more integrated quality system. All registered providers will face TEF assessments, with no exemptions for small providers. Given the number of new providers seeking OfS registration, it is likely that the number to be assessed will be considerably larger than the 227 institutions in the 2023 TEF.

    Partly because of the larger number of assessments to be undertaken, the TEF will move to a rolling cycle, with a pool of assessors. Institutions will still be awarded three grades – one for outcomes, one for experience and one overall – but their overall grade will simply be the lower of the other two. The real impact of this will be on Bronze-rated providers, who could find themselves subject to a range of measures, potentially including student number controls or fee constraints, until they show improvement.
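
    In effect, the proposed combination rule is just a minimum taken over the two aspect grades. A minimal sketch of how it would operate (the grade ordering follows the consultation; the Python and the function name are illustrative, not OfS’s):

    ```python
    # Proposed four-point scale, ordered from lowest to highest.
    GRADE_ORDER = ["Requires Improvement", "Bronze", "Silver", "Gold"]

    def overall_rating(experience: str, outcomes: str) -> str:
        """Sketch of the proposed rule: the overall grade is the lower aspect grade."""
        return min(experience, outcomes, key=GRADE_ORDER.index)

    # Silver for experience with Bronze for outcomes yields Bronze overall.
    assert overall_rating("Silver", "Bronze") == "Bronze"
    ```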

    The OfS consultation paper marks a significant shift in the purpose of the TEF, from quality enhancement to regulation and from improvement to compliance. The most significant changes are at the lower end of assessed performance. The consultation paper makes sensible changes to aspects of the TEF which always posed challenges for assessors and regulators, tidying up the relationship between the threshold B3 standards and the lowest TEF grades. It correctly separates measures of institutional performance on continuation and completion – over which institutions have more direct influence – from progression to employment – over which institutions have less influence.

    Pressure points

    But it does this at some heavy costs. By treating the Bronze grade as a measure of performance at, rather than above, threshold quality, it will produce just two grades above the threshold. In shifting the focus towards quantitative indicators and away from institutional discussion of context, it will make TEF life more difficult for further education institutions and institutions in locations with challenging graduate labour markets. The replacement of the student submission with student focus groups may allow more depth on some issues, but comes at the expense of breadth, and the student voice is, disappointingly, weakened.

    There are further losses as the regulatory purpose is embedded. The most significant is the move away from educational gain, and this is a real loss: following TEF 2023, almost all institutions were developing their approaches to and evaluation of educational gain, and we have seen many examples where this was shaping fruitful approaches to articulating institutional goals and the way they shape educational provision.

    Educational gain is an area in which institutions were increasingly thinking about distinctiveness and how it informs student experience. Seeing it go will weaken the power of many education strategies. The ideas of educational gain and distinctiveness will almost certainly still be required for confident performance at the highest levels of achievement, so it is a pity that they are now less explicit. Educational gain can drive distinctiveness, and distinctiveness can drive quality.

    Two sorts of institutions will face the most significant challenges. The first, obviously, are providers rated Bronze in 2023, or Silver-rated providers whose indicators are on a downward trajectory. Eleven universities were given a Bronze rating overall in the last TEF exercise – and 21 received Bronze either for the student experience or student outcomes aspects. Of the 21, only three Bronzes were for student outcomes, but under the OfS plans, all would be graded Bronze, since any institution would be given its lowest aspect grade as its overall grade. Under the proposals, Bronze-graded institutions will need to address concerns rapidly to mitigate impacts on growth plans, funding, prestige and competitive position.

    The second group facing significant challenges will be those in difficult local and regional labour markets. Of the 18 institutions with Bronze in one of the two aspects of TEF 2023, only three were graded Bronze for student outcomes, whereas 15 were for student experience. Arguably this was to be expected when only two of the six features of student outcomes had associated indicators: continuation/completion and progression.

    In other words, if indicators were substantially below benchmark, there were opportunities to show how outcomes were supported and educational gain was developed. Under the new proposals, the approach to assessing student outcomes is largely, if not exclusively, indicator-based, resting on continuation and completion. It is likely to reinforce differences between institutions, especially those with intakes from underrepresented populations.

    The stakes

    The new TEF will play out in different ways in different parts of the sector. The regulatory focus will increase pressure on some institutions, whilst appearing to relieve it in others. For those institutions operating at 2023 Bronze levels or where 2023 Silver performance is declining, the negative consequences of a poor performance in the new TEF, which may include student number controls, will loom large in institutional strategy. The stakes are now higher for these institutions.

    On the other hand, institutions whose graduate employment and earnings outcomes are strong are likely to feel relieved, though careful reading of the grade specifications for higher performance suggests that there is work to be done on education strategies even in the best-performing 2023 institutions.

    In public policy, lifting the floor – by addressing regulatory compliance – and raising the ceiling – by promoting improvement – at the same time is always difficult, but the OfS consultation seems to have landed decisively on the side of compliance rather than improvement.

  • OfS’ understanding of the student interest requires improvement

    When the Office for Students’ (OfS) proposals for a new quality assessment system for England appeared in the inbox, I happened to be on a lunchbreak from delivering training at a students’ union.

    My own jaw had hit the floor several times during my initial skim of its 101 pages – and so to test the validity of my initial reactions, I attempted to explain, in good faith, the emerging system to the student leaders who had reappeared for the afternoon.

    Having explained that the regulator was hoping to provide students with a “clear view of the quality of teaching and learning” at the university, their first confusion was tied up in the idea that this was even possible in a university with 25,000 students and hundreds of degree courses.

    They’d assumed that some sort of dashboard might be produced that would help students differentiate between at least departments, if not courses. When I explained that the “view” would largely be in the form of a single “medal” of Gold, Silver, Bronze or Requires Improvement for the whole university, I was met with confusion.

    We’d spent some time before the break discussing the postgraduate student experience – including poor induction for international students, the lack of a policy on supervision for PGTs, and the isolation that PGRs had fed into the SU’s strategy exercise.

    When I explained that OfS was planning to introduce a PGT NSS in 2028 and then use that data in the TEF from 2030-31 – such that their university might not have the data taken into account until 2032-33 – I was met with derision. When I explained that PGRs may be incorporated from 2030–31 onwards, I was met with scorn.

    Keen to know how students might feed in, one officer asked how their views would be taken into account. I explained that as well as the NSS, the SU would have the option to create a written submission to provide contextual insight into the numbers. When one of them observed that “being honest in that will be a challenge given student numbers are falling and so is the SU’s funding”, the union’s voice coordinator (who’d been involved in the 2023 exercise) in the corner offered a wry smile.

    One of the officers – who’d had a rewarding time at the university pretty much despite their actual course – wanted to know if the system was going to tackle students like them not really feeling like they’d learned anything during their degree. Given the proposals’ intention to drop educational gain altogether, I moved on at this point. Young people have had enough of being let down.

    I’m not at home in my own home

    Back in February, you might recall that OfS published a summary of a programme of polling and focus groups that it had undertaken to understand what students wanted and needed from their higher education – and the extent to which they were getting it.

    At roughly the same time, it published proposals for a new initial Condition C5: Treating students fairly, to apply initially to newly registered providers, which drew on that research.

    As well as issues it had identified with things like contractual provisions, hidden costs and withdrawn offers, it was particularly concerned with the risk that students may take a decision about what and where to study based on false, misleading or exaggerated information.

    OfS’ own research into the Teaching Excellence Framework 2023 signals one of the culprits. Polling by Savanta in April and May 2024, and follow-up focus groups with prospective undergraduates over the summer, both showed that applicants consistently described TEF outcomes as too broad to be of real use for their specific course decisions.

    They wanted clarity about employability rates, continuation statistics, and job placements – but what they got instead was a single provider-wide badge. Many struggled to see meaningful differences between Gold and Silver, or to reconcile how radically different providers could both hold Gold.

    The evidence also showed that while a Gold award could reassure applicants, more than one in five students aware of their provider’s TEF rating disagreed that it was a fair reflection of their own experience. That credibility gap matters.

    If the TEF continues to offer a single label for an entire university, with data that are both dated and aggregated, there is a clear danger that students will once again be misled – this time not by hidden costs or unfair contracts, but by the regulatory tool that is supposed to help them make informed choices.

    You don’t know what I’m feeling

    Results of the National Student Survey (NSS) will remain absolutely central to the TEF.

    OfS says that’s because “the NSS remains the only consistently collected, UK-wide dataset that directly captures students’ views on their teaching, learning, and academic support,” and because “its long-running use provides reliable benchmarked data which allows for meaningful comparison across providers and trends over time.”

    It stresses that the survey provides an important “direct line to student perceptions,” which balances outcomes data and adds depth to panel judgements. In other words, the NSS is positioned as an indispensable barometer of student experience in a system that otherwise leans heavily on outcomes.

    But set aside the fact that it surveys only those who make it to the final year of a full undergraduate degree. The NSS doesn’t ask whether students felt their course content was up to date with current scholarship and professional practice, or whether learning outcomes were coherent and built systematically across modules and years — both central expectations under B1 (Academic experience).

    It doesn’t check whether students received targeted support to close knowledge or skills gaps, or whether they were given clear help to avoid academic misconduct through essay planning, referencing, and understanding rules – requirements spelled out in the guidance to B2 (Resources, support and engagement). It also misses whether students were confident that staff were able to teach effectively online, and whether the learning environment – including hardware, software, internet reliability, and access to study spaces – actually enabled them to learn. Again, explicit in B2, but invisible in the survey.

    On assessment, the NSS asks about clarity, fairness, and usefulness of feedback, but it doesn’t cover whether assessment methods really tested what students had been taught, whether tasks felt valid for measuring the intended outcomes, or whether students believed their assessments prepared them for professional standards. Yet B4 (Assessment and awards) requires assessments to be valid and reliable, moderated, and robust against misconduct – areas NSS perceptions can’t evidence.

    I could go on. The survey provides snapshots of the learning experience but leaves out important perception checks on the coherence, currency, integrity, and fitness-for-purpose of teaching and learning, which the B conditions (and students) expect providers to secure.

    And crucially, OfS has chosen not to use the NSS questions on organisation and management in the future TEF at all. That’s despite its own 2025 press release highlighting it as one of the weakest-performing themes in the sector – just 78.5 per cent of students responded positively – and pointing out that disabled students in particular reported significantly worse experiences than their peers.

    OfS said then that “institutions across the sector could be doing more to ensure disabled students are getting the high quality higher education experience they are entitled to,” and noted that the gap between disabled and non-disabled students was growing in organisation and management. In other words, not only is the NSS not fit for purpose, OfS’ intended use of it isn’t either.

    I followed the voice you gave to me

    In the 2023 iteration of the TEF, the independent student submission was supposed to be one of the most exciting innovations. It was billed as a crucial opportunity for providers’ students to tell their own story – not mediated through NSS data or provider spin, but directly and independently. In OfS’ words, the student submission provided “additional insights” that would strengthen the panel’s ability to judge whether teaching and learning really were excellent.

    In this consultation, OfS says it wants to “retain the option of student input,” but with tweaks. The headline change is that the student submission would no longer need to cover “student outcomes” – an area that SUs often struggled with given the technicalities of data and the lack of obvious levers for student involvement.

    On the surface, that looks like a kindness – but scratch beneath the surface, and it’s a red flag. Part of the point of Condition B2.2b is that providers must take all reasonable steps to ensure effective engagement with each cohort of students so that “those students succeed in and beyond higher education.”

    If students’ unions feel unable to comment on how the wider student experience enables (or obstructs) student success and progression, that’s not a reason to delete it from the student submission. It’s a sign that something is wrong with the way providers involve students in what’s done to understand and shape outcomes.

    The trouble is that this light-touch response ignores the depth of feedback OfS has already commissioned and received. Both the IFF evaluation of TEF 2023 and OfS’ own survey of student contacts documented the serious problems that student reps and students’ unions faced.

    They said the submission window was far too short – dropping guidance in October, demanding a January deadline, colliding with elections, holidays, and strikes. They said the guidance was late, vague, inaccessible, and offered no examples. They said the template was too broad to be useful. They said the burden on small and under-resourced SUs was overwhelming, and even large ones had to divert staff time away from core activity.

    They described barriers to data access – patchy dashboards, GDPR excuses, lack of analytical support. They noted that almost a third didn’t feel fully free to say what they wanted, with some monitored by staff while writing. And they told OfS that the short, high-stakes process created self-censorship, strained relationships, and duplication without impact.

    The consultation documents brush most of that aside. Little in the proposals tackles the resourcing, timing, independence, or data access problems that students actually raised.

    I’m not at home in my own home

    OfS also proposes to commission “alternative forms of evidence” – like focus groups or online meetings – where students aren’t able to produce a written submission. The regulator’s claim is that this will reduce burden, increase consistency, and make it easier to secure independent student views.

    The focus group idea is especially odd. Student representatives’ main complaint wasn’t that they couldn’t find the words – it was that they lacked the time, resource, support, and independence to tell the truth. Running a one-off OfS focus group with a handful of students doesn’t solve that. It actively sidesteps the standard in B2 and the DAPs rules on embedding students in governance and representation structures.

    If a student body struggles to marshal the evidence and write the submission, the answer should be to ask whether the provider is genuinely complying with the regulatory conditions on student engagement. Farming the job out to OfS-run focus groups allows providers with weak student partnership arrangements to escape scrutiny – precisely the opposite of what the student submission was designed to do.

    The point is that the quality of a student submission is not just a “nice to have” extra insight for the TEF panel. It is, in itself, evidence of whether a provider is complying with Condition B2. It requires providers to take all reasonable steps to ensure effective engagement with each cohort of students, and says students should make an effective contribution to academic governance.

    If students can’t access data, don’t have the collective capacity to contribute, or are cowed into self-censorship, that is not just a TEF design flaw – it is B2 evidence of non-compliance. The fact that OfS has never linked student submission struggles to B2 is bizarre. Instead of drawing on the submissions as intelligence about engagement, the regulator has treated them as optional extras.

    The refusal to make that link is even stranger when compared to what came before. Under the old QAA Institutional Review process, the student written submission was long-established, resourced, and formative. SUs had months to prepare, could share drafts, and had the time and support to work with managers on solutions before a review team arrived. It meant students could be honest without the immediate risk of reputational harm, and providers had a chance to act before being judged.

    TEF 2023 was summative from the start, rushed and high-stakes, with no requirement on providers to demonstrate they had acted on feedback. The QAA model was designed with SUs and built around partnership – the TEF model was imposed by OfS and designed around panel efficiency. OfS has learned little from the feedback from those who submitted.

    But now I’ve gotta find my own

    While I’m on the subject of learning, we should finally consider how far the proposals have drifted from the lessons of Dame Shirley Pearce’s review. Back in 2019, her panel made a point of recording what students had said loud and clear – the lack of learning gain in TEF was a fundamental flaw.

    In fact, educational gain was the single most commonly requested addition to the framework, championed by students and their representatives who argued that without it, TEF risked reducing success to continuation and jobs.

    Students told the review they wanted a system that showed whether higher education was really developing their knowledge, skills, and personal growth. They wanted recognition of the confidence, resilience, and intellectual development that are as much the point of university as a payslip.

    Pearce’s panel agreed, recommending that Educational Gains should become a fourth formal aspect of TEF, encompassing both academic achievement and personal development. Crucially, the absence of a perfect national measure was not seen as a reason to ignore the issue. Providers, the panel said, should articulate their own ambitions and evidence of gain, in line with their mission, because failing to even try left a gaping hole at the heart of quality assessment.

    Fast forward to now, and OfS is proposing to abandon the concept entirely. To students and SUs who have been told for years that their views shape regulation, the move is a slap in the face. A regulator that once promised to capture the full richness of the student experience is now narrowing the lens to what can be benchmarked in spreadsheets. The result is a framework that tells students almost nothing about what they most want to know – whether their education will help them grow.

    You see the same lack of learning in the handling of extracurricular and co-curricular activity. For students, societies, volunteering, placements, and co-curricular opportunities are not optional extras but integral to how they build belonging, develop skills, and prepare for life beyond university. Access to these opportunities features heavily in the Access and Participation Risk Register precisely because they matter to student success and because they are part of the educational offer in and of themselves.

    But in TEF 2023 OfS tied itself in knots over whether they “count” – at times allowing them in if narrowly framed as “educational”, at other times excluding them. To students who know how much they learn outside the lecture theatre, the distinction looked absurd. Now the killing off of educational gain excludes them altogether.

    You should have listened

    Taken together, OfS has delivered a masterclass in demonstrating how little it has learned from students. As a result, the body that once promised to put student voice at the centre of regulation is in danger of constructing a TEF that is both incomplete and actively misleading.

    It’s a running theme – more evidence that OfS is not interested enough in genuinely empowering students. If students don’t know what they can, should, or could expect from their education – because the standards are vague, the metrics are aggregated, and the judgements are opaque – then their representatives won’t know either. And if their reps don’t know, their students’ union can’t effectively advocate for change.

    When the only judgements against standards that OfS is interested in come from OfS itself, delivered through a very narrow funnel of risk-based regulation, that funnel inevitably gets choked off through appeals to “reduced burden” and aggregated medals that tell students nothing meaningful about their actual course or experience. The result is a system that talks about student voice while systematically disempowering the very students it claims to serve.

    In the consultation, OfS says that it wants its new quality system to be recognised as compliant with the European Standards and Guidelines (ESG), which would in time allow it to seek membership of the European Quality Assurance Register (EQAR). That’s important for providers with international partnerships and recruitment ambitions, and for students given that ESG recognition underpins trust, mobility, and recognition across the European Higher Education Area.

    But OfS’ conditions don’t require co-design of the quality assurance framework itself, nor proof that student views shape outcomes. Its proposals expand student assessor roles in the TEF, but don’t guarantee systematic involvement in all external reviews or transparency of outcomes – both central to ESG. And as the ongoing QA-FIT project and ESU have argued, the next revision of the ESG is likely to push student engagement further, emphasising co-creation, culture, and demonstrable impact.

    If it does apply for EQAR recognition, our European peers will surely notice what English students already know – the gap between OfS’ rhetoric on student partnership and the reality of its actual understanding and actions is becoming impossible to ignore.

    When I told those student officers back on campus that their university would be spending £25,000 of their student fee income every time it has to take part in the exercise, their anger was palpable. When I added that according to the new OfS chair, Silver and Gold might enable higher fees, while Bronze or “Requires Improvement” might cap or further reduce their student numbers, they didn’t actually believe me.

    The student interest? Hardly.

  • An assessor’s perspective on the Office for Students’ TEF shake-up

    Across the higher education sector in England some have been waiting with bated breath for details of the proposed new Teaching Excellence Framework. Even amidst the multilayered preparations for a new academic year – the planning to induct new students, to teach well and assess effectively, to create a welcoming environment for all – those responsible for education quality have had one eye firmly on the new TEF.

    The OfS has now published its proposals along with an invitation to the whole sector to provide feedback on them by 11 December 2025. As an external adviser for some very different types of provider, I’m already hearing a kaleidoscope of changing questions from colleagues. When will our institution or organisation next be assessed if the new TEF is to run on a rolling programme rather than in the same year for everyone? How will the approach to assessing us change now that basic quality requirements are included alongside the assessment of educational ‘excellence’? What should we be doing right now to prepare?

    Smaller providers, including further education colleges that offer some higher education programmes, have not previously been required to participate in the TEF assessment. They will now all need to take part, so have a still wider range of questions about the whole process. How onerous will it be? How will data about our educational provision, both quantitative and qualitative, be gathered and assessed? What form will our written submission to the OfS need to take? How will judgements be made?

    As a member of TEF assessment panels through TEF’s entire lifecycle to date, I’ve read the proposals with great interest. From an assessor’s point of view, I’ve pondered on how the assessment process will change. Will the new shape of TEF complicate or help streamline the assessment process so that ratings can be fairly awarded for providers of every mission, shape and size?

    Panel focus

    TEF panels have always comprised experts from across the sector, including academics, professional staff and student representatives. We have looked very carefully at the evidence of “teaching excellence” (I think of it as good education) from each provider. It makes sense that the two main areas of assessment, or “aspects” – student experience and student outcomes – will continue to be discrete areas of focus, leading to two separate ratings of Gold, Silver, Bronze or Requires Improvement. That’s because the data for each can differ quite markedly within a single provider, so conflating the two judgements can mislead students.

    [Diagram from page 18 of the consultation document]

    Another positive continuity is the retention of both quantitative and qualitative evidence. Quantitative data include the detailed datasets provided by OfS, benchmarked against the sector. These are extremely helpful to assessors who can compare the experiences and outcomes of students from different demographics across the full range of providers.

    Qualitative data have previously come from 25-page written submissions from each provider, and from written student submissions. There are changes afoot for both of these forms of evidence, but they will remain crucial.

    The written provider submissions may be shorter next time. Arguably there is a risk here, as submissions have always enabled assessors to contextualise the larger datasets. Each provider has its own story of setting out to make strategic improvements to its educational provision, and the submissions include both qualitative narrative and internally produced quantitative datasets related to the assessment criteria, or indicators.

    However, it’s reasonable for future submissions to be shorter as the student outcomes aspect will rely upon a more nuanced range of data relating to study outcomes as well as progression post-study (proposal 7). While it’s not yet clear what the full range of data will be, this approach is potentially helpful to assessors and to the sector, as students’ backgrounds, subject fields, locations and career plans vary greatly and these data take account of those differences.

    The greater focus on improved datasets suggests that there will be less reliance on additional information, previously provided at some length, on how students’ outcomes are being supported. The proof of the pudding for how well students continue with, complete and progress from their studies is in the eating, or rather in the outcomes themselves, rather than the recipes. Outcomes criteria should be clearer in the next TEF in this sense, and more easily applied with consistency.

    Another proposed change focuses on how evidence might be more helpfully elicited from students and their representatives (proposal 10). In the last TEF students were invited to submit written evidence, and some student submissions were extremely useful to assessors, focusing on the key criteria and giving a rounded picture of local improvements and areas for development. For understandable reasons, though, students of some providers did not, or could not, make a submission; the huge variation in provider size means that in some contexts students do not have the capacity or opportunity to write up their collective experiences. This variation was challenging for assessors, and anything that can be done to level the playing field for students’ voices next time will be welcomed.

    Towards the data limits

    Perhaps the greatest challenge for TEF assessors in previous rounds arose when we were faced with a provider with very limited data. OfS’s proposal 9 sets out to address this by varying the assessment approach accordingly. Where there is no statistical confidence in a provider’s NSS data (or no NSS data at all), direct evidence of students’ experiences with that provider will be sought, and where there is insufficient statistical confidence in a provider’s student outcomes, no rating will be awarded for that aspect.

    The proposed new approach to the outcomes rating makes great sense – it is so important to avoid reaching for a rating which is not supported by clear evidence. The plan to fill any NSS gap with more direct evidence from students is also logical, although it could run into practical challenges. It will be useful to see suggestions from the sector about how this might be achieved within differing local contexts.

    Finally, how might assessment panels be affected by changes to what we are assessing, and the criteria for awarding ratings? First, both aspects will incorporate the requirements of OfS’s B conditions – the general, ongoing conditions of registration. The student experience aspect will now be aligned with B1 (course content and delivery), B2 (resources, academic support and student engagement) and part of B4 (effective assessment). Similarly, the student outcomes condition (B3) will be embedded into the outcomes aspect of the new TEF. This should make even clearer to assessors what is being assessed, where the baseline is, and what sits above that line as excellent or outstanding.

    And this in turn should make agreeing upon ratings more straightforward. It was not always clear in the previous TEF round where the line should be drawn between Requires Improvement and merely meeting the sector’s basic requirements. This applied only to the very small number of providers whose provision did not appear, to put it plainly, to be good enough.

    But more clarity in the next round about the connection between baseline requirements and TEF ratings should aid assessment processes. Clarification that in the future a Bronze award signifies “meeting the minimum quality requirements” is also welcome. Although the sector will need time to adjust to this change, it is in line with the risk-based approach OfS wants to take to the quality system overall.

    The £25,000 question

    Underlying all of the questions being asked by providers now is a fundamental one: how will we do next time?

    Looking at the proposals with my assessor’s hat on, I can’t predict what will happen for individual providers, but it does seem that the evolved approach to awarding ratings should be more transparent and more consistent. Providers need to continue to understand their own education-related data, both quantitative and qualitative, and commit to a whole-institution approach to embedding improvements, working in close partnership with students.

    Assessment panels will continue to take our roles very seriously, to engage fully with agreed criteria, and to do everything we can to make a positive contribution to encouraging, recognising and rewarding teaching excellence in higher education.

  • TEF6: the incredible machine takes over quality assurance regulation

    If you loved the Teaching Excellence Framework, were thrilled by the outcomes (B3) thresholds, lost your mind for the Equality of Opportunity Risk Register, and delighted in the sporadic risk-based OfS investigations based on years-old data, you’ll find a lot to love in the latest set of Office for Students proposals on quality assurance.

    In today’s Consultation on the future approach to quality regulation you’ll find a cyclical, cohort-based TEF that also includes a measurement (against benchmarks) of compliance with the thresholds for student outcomes inscribed in the B3 condition. Based on the outcomes of this super-TEF, and prioritised by an assessment of risk, OfS will make interventions (including controls on recruitment and on the conditions of degree awarding powers) and targeted investigations. This is a first-stage consultation only; stage two will come in August 2026.

    It’s not quite a grand unified theory: we don’t mix in the rest of the B conditions (covering less pressing matters like academic standards, the academic experience, student support, assessment) because, in the words of OfS:

    Such an approach would be likely to involve visits to all providers, to assess whether they meet all the relevant B conditions of registration

    The students who are struggling right now with the impacts of higher student/staff ratios and a lack of capacity due to over-recruitment will greatly appreciate this reduction in administrative burden.

    Where we left things

    When we last considered TEF we were expecting an exercise every four years, drawing on provider narrative submissions (which included a chunk on a provider’s own definition and measurement of educational gain), students’ union narrative submissions, and data on outcomes and student satisfaction. Providers were awarded a “medal” for each of student outcomes and student experience – a matrix determined whether this resulted in an overall Bronze, Silver, Gold or Requires Improvement.

    The first three of these awards were deemed to be above minimum standards (with slight differences between each), while the latter was a portal to the much more punitive world of regulation under group B (student experience) conditions of registration. Most of the good bits of this approach came from the genuinely superb Pearce Review of TEF conducted under section 26 of the Higher Education and Research Act, which fixed a lot of the statistical and process nonsense that had crept in under previous iterations and then-current plans (though not every recommendation was implemented).

    TEF awards were last made in 2023, with the next iteration – involving all registered providers plus anyone else who wanted to play along – due in 2027.

    Perma-TEF

    A return to a rolling TEF rather than a quadrennial quality enhancement jamboree means a pool of TEF assessors rather than a one-off panel. There will be steps taken to ensure that an appropriate group of academic and student assessors is selected to assess each cohort – there will be special efforts made to use those with experience of smaller, specialist, and college-based providers – and a tenure of two-to-three years is planned. OfS is also considering whether its staff can be included among the storied ranks of those empowered to facilitate ratings decisions.

    Likewise, we’ll need a more established appeals system. Open only to those with Bronze or Requires Improvement ratings (Gold and Silver are passing grades), it would be a way for a provider to potentially forestall the engagement and investigations that follow from an active risk to student experience or outcomes, or a risk of a future breach of a condition of registration.

    Each provider would be assessed once during the first three-year cycle – all providers taking part would be assessed in either 2027-28, 2028-29, or 2029-30 (covering only undergraduate students, because there’s no postgraduate NSS yet – OfS plan to develop one before 2030). In many cases they’ll only know which year at the start of the academic year in question, which will give them six months to get their submissions sorted.

    Because Bronze is now bad (rather than “good but not great” as it used to be), the first year’s cohort could well include all providers with a 2023 Bronze (or Requires Improvement) rating, plus some with increased risks of non-compliance, some with Bronze in one of the TEF aspects, and some without a rating.

    After this, how often you are assessed depends on your rating – if you are Gold overall it is five years until the next try, Silver means four years, and Bronze three (if you are “Requires Improvement” you probably have other concerns beyond the date of your next assessment) – but this can be tweaked if OfS decides there is an increased risk to quality, or for any other reason.
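
    Expressed as a lookup, the proposed cadence is simple. A sketch, using the intervals above (and ignoring, for simplicity, OfS’s power to pull an assessment forward):

    ```python
    # Years until the next assessment, keyed by overall rating (proposed).
    REASSESSMENT_INTERVAL = {"Gold": 5, "Silver": 4, "Bronze": 3}

    def next_assessment(rating: str, assessed_in: int) -> int:
        # "Requires Improvement" providers face intervention rather than a
        # scheduled next cycle, so they are deliberately not mapped here.
        return assessed_in + REASSESSMENT_INTERVAL[rating]

    print(next_assessment("Silver", 2028))  # 2032
    ```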

    Snakes and ladders

    Ignore the gradations and matrices in the Pearce Review – the plan now is that your lowest TEF aspect rating (remember, you got sub-awards last time for student experience and student outcomes) will be your overall rating. So Silver for experience and Bronze for outcomes makes for an overall Bronze. And as OfS has decided that you now have to pay (likely around £25,000) to enter what is a compulsory exercise, this is a cost that could lead to a larger cost in future.

    In previous TEFs, the only negative consequences for those outside the top ratings have been reputational – a loss of bragging rights of, arguably, negligible value. The new proposals align Bronze with the (B3) minimum required standards and put Requires Improvement below them: in the new calculus of value the minimum is not good enough, and there will be consequences.

    We’ve already had some hints that a link to fee cap levels is back on the cards, but in the meantime OfS is pondering a cap on student numbers expansion to punish those who turn out Bronze or Requires Improvement. The workings of the expansion cap will be familiar to those who recall the old additional student numbers process – increases of more than five per cent (the old tolerance band, which is still a lot) would not be permitted for poorly rated providers.
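
    On those numbers, the gating logic would look something like the sketch below – how the baseline would actually be measured is a detail left for later consultation, so the names and the application of the five per cent figure here are assumptions:

    ```python
    TOLERANCE = 0.05  # the old additional student numbers tolerance band

    def expansion_permitted(rating: str, current: int, planned: int) -> bool:
        """Sketch: growth above the tolerance is blocked for poorly rated providers."""
        if rating in ("Gold", "Silver"):
            return True  # no cap proposed for the higher ratings
        return planned <= current * (1 + TOLERANCE)

    # A Bronze provider planning ten per cent growth would be refused.
    print(expansion_permitted("Bronze", 10_000, 11_000))  # False
    ```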

    For providers without degree awarding powers, it is unlikely they will be successful in applying for them with Bronze and below – but OfS is also thinking about restricting aspects of existing providers’ DAPs, for example limiting their ability to subcontract or franchise provision in future. This is another de facto numbers cap in many cases, and it all comes ahead of a future consultation on DAPs that could make for an even closer link with TEF.

    Proposals for progression

    Proposal 6 will simplify the existing B3 thresholds, and integrate the way they are assessed into the TEF process. In a nutshell, the progression requirement for B3 would disappear – with the assessment made purely on continuation and completion, with providers able to submit contextual and historic information to explain why performance is not above the benchmark or threshold as a part of the TEF process.

    Progression will still be considered at the higher levels of TEF, and here contextual information can play more of a part – with what I propose we start calling the Norland Clause allowing providers to submit details of courses that lead to jobs that ONS does not consider professional or managerial. That existing indicator will be joined by another based on graduate reflections (from the Graduate Outcomes survey) on how they are using what they have learned, and by benchmarked salaries three years after graduation from DfE’s Longitudinal Education Outcomes (LEO) data – in deference to that random Kemi Badenoch IFS commission at the tail end of the last parliament.

    Again, there will be contextual benchmarks for these measures (and hopefully some hefty caveating on the use of LEO median salaries) – and, as is the pattern in this consultation, there are detailed proposals to follow.

    Marginal gains, marginal losses

    The “educational gains” experiment, pioneered in the last TEF, is over: this makes three times that a regulator in England has tried and failed to include a measure of learning gain in some form of regulation. OfS is still happy for you to mention your educational gain work in your next narrative submission, but it isn’t compulsory. The reason: reducing burden, and a focus on comparability rather than a diversity of bespoke measures.

    Asking providers what something means in their context, rather than applying a one-size-fits-all measure of student success, was an immensely powerful component of the last exercise. Providers who started on that journey at considerable expense in data gathering and analysis may be less than pleased at this latest development – and we’d certainly understood that DfE were fans of the approach too.

    Similarly, the requirement for students to feed back on student outcomes in their submissions to TEF has been removed. The ostensible reason is that students found it difficult last time round – the result is that insight from the valuable networks between existing students and their recently graduated peers is lost. The outcomes end of TEF is now very much data-driven, with only the chance to explain unusual results on offer. It’s a retreat from some of the contextual sense that crept in with the Pearce Review.

    Business as usual

    Even though TEF now feels like it is everywhere and for always, there’s still a place for OfS’ regular risk-based monitoring – and annex I (yes, there are that many annexes) contains a useful draft monitoring tool.

    Here it is very good to see staff:student ratios, falling entry requirements, a large growth in foundation year provision, and a rapid growth in numbers among what are noted as indicators of risk to the student experience. It is possible to imagine an excellent system, designed outside the seemingly inviolate framework of the TEF, where events like these would trigger an investigation of provider governance and quality assurance processes.

    Alas, the main use of this monitoring is to decide whether or not to bring a TEF assessment forward, something that punts an immediate risk to students into something that will be dealt with retrospectively. If I’m a student on a first year that has ballooned from 300 to 900 from one cycle to the next there is a lot of good a regulator can do by acting quickly – I am unlikely to care whether a Bronze or Silver award is made in a couple of years’ time.

    International principles

    One of the key recommendations of the Behan review on quality was a drawing together of the various disparate (and, yes, burdensome) streams of quality and standards assurance and enhancement into a unified whole. We obviously don’t quite get there – but there has been progress on another key sector bugbear that came up both in Behan and in the Lords’ Industry and Regulators Committee review: adherence to international quality assurance standards (to facilitate international partnerships and, increasingly, recruitment).

    OfS will “work towards applying to join the European Quality Assurance Register for Higher Education” at the appropriate time – clearly feeling that the long overdue centring of the student voice in quality assurance (there will be an expanded role for and range of student assessors) and the incorporation of a cyclical element (to desk assessments at least) is enough to get them over the bar.

    It isn’t. Standard 2.1 of the ESG requires that “external quality assurance should address the effectiveness of the internal quality assurance processes” – philosophically establishing the key role of providers themselves in monitoring and upholding the quality of their own provision, with the external assurance process primarily assessing whether (and how well) this has been done. For whatever reason OfS believes the state (in the form of the regulator) needs to be (and is capable of being!) responsible for all quality assurance, everywhere, all the time. It’s a glaring weakness of the OfS system that urgently needs to be addressed. And it hasn’t been, this time.

    The upshot is that while the new system looks ESG-ish, it is unlikely to be judged to be in full compliance.

    Single word judgements

    The recent use of single headline judgements of educational quality in ways that have far-reaching regulatory implications is hugely problematic. The government announced the abandonment of the old “requires improvement, inadequate, good, and outstanding” judgements for schools in favour of a more nuanced “report card” approach – driven in part by the death by suicide of headteacher Ruth Perry in 2023. The “inadequate” rating given to her Caversham Primary School would have meant forced academisation and deeper regulatory oversight.

    Regulation and quality assurance in education needs to be rigorous and reliable – it also needs to be context-aware and focused on improvement rather than retribution. Giving single headline grades cute, Olympics-inspired names doesn’t really cut it – and as we approach the fifth redesign of an exercise that has only run six times since 2016 you would perhaps think that rather harder questions need to be asked about the value (and cost!) of this undertaking.

    If we want to assess and control the risks of modular provision, transnational education, rapid expansion, and a growing number of innovations in delivery we need providers as active partners in the process. If we want to let universities try new things we need to start from a position that we can trust universities to have a focus on the quality of the student experience that is robust and transparent. We are reaching the limits of the current approach. Bad actors will continue to get away with poor quality provision – students won’t see timely regulatory action to prevent this – and eventually someone is going to get hurt.

  • Back to the future for the TEF? Back to school for OfS?

    As the new academic year dawns, there is a feeling of “back to the future” for the Teaching Excellence Framework (TEF).

    And it seems that the Office for Students (OfS) needs to go “back to school” in its understanding of the measurement of educational quality.

    Both of these feelings come from the OfS Chair’s suggestion that the level of undergraduate tuition fees institutions can charge may be linked to institutions’ TEF results.

    For those just joining us on TEF-Watch, this is where the TEF began back in the 2015 Green Paper.

    At that time, the idea of linking tuition fees to the TEF’s measure of quality was dropped pretty quickly because it was, and remains, totally unworkable in any fair and reasonable way.

    This is for a number of reasons that would be obvious to anyone who has a passing understanding of how the TEF measures educational quality, which I wrote about on Wonkhe at the time.

    Can’t work, won’t work

    First, the TEF does not measure the quality of individual degree programmes. It evaluates, in a fairly broad-brush way, a whole institution’s approach to teaching quality and related outcomes. All institutions have programmes of variable quality.

    This means that linking tuition fees to TEF outcomes could lead to significant numbers of students on lower quality programmes being charged the higher rate of tuition fees.

    Second, and even more unjustly, the TEF does not give any indication of the quality of education that students will directly experience.

    Rather, when they are applying for their degree programme, it provides a measure of an institution’s general teaching quality at the time of its last TEF assessment.

    Under the plans currently being considered for a rolling TEF, this could be up to five years previously – which would mean it gives a view of educational quality at least nine years before applicants will graduate. Even if it was from the year before they enrol, it will be based on an assessment of evidence that took place at least four years before they will complete their degree programme.
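
    The arithmetic behind that claim, made explicit (the figures are this piece’s assumptions: a rating up to five years old at the point of application, roughly a year between applying and enrolling, and a three-year full-time degree):

    ```python
    # Worked example of the lag between TEF assessment and graduation.
    rating_age_at_application = 5  # years since the assessment, on a rolling TEF
    application_to_enrolment = 1   # roughly a year from applying to starting
    degree_length = 3              # the shortest full-time degree

    print(rating_age_at_application + application_to_enrolment + degree_length)  # 9
    ```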

    Those knowledgeable about educational quality understand that, over such a time span, educational quality could have dramatically changed. Given this, on what basis can it be fair for new students to be charged the higher rate of tuition fees as a result of a general quality of education enjoyed by their predecessors?

    These two reasons would make a system in which tuition fees were linked to TEF outcomes incredibly unfair. And that is before we even consider its impact on the TEF as a valid measure of educational quality.

    The games universities play

    The higher the stakes in the TEF, the more institutions will feel forced to game the system. In the current state of financial crisis, any institutional leader is likely to feel almost compelled to pull every trick in the book in order to ensure the highest possible tuition fee income for their institution.

    How could they not, given that it could make the difference between institutional survival, a forced merger, or potential closure? This would make the TEF even less of an effective measure of educational quality and much more of a measure of how effectively institutions can play the system.

    It takes very little understanding of such processes to see that institutions with the greatest resources will be in by far the best position to finance the playing of such games. Making the stakes so high for institutions would also remove any incentive for them to use the TEF as an opportunity to openly identify educational excellence and meaningfully reflect on their educational quality.

    This would mean that the TEF loses any potential to meet its core purpose, identified by the Independent Review of the TEF, “to identify excellence and encourage enhancement”. It will instead become even more of a highly pressurised marketing exercise with the TEF outcomes having potentially profound consequences for the future survival of some institutions.

    In its own terms, the suggestion about linking undergraduate tuition fees to TEF outcomes is nothing to worry about. It simply won’t happen. What is a much greater concern is that the OfS is publicly making this suggestion at a time when it is claiming it will work harder to advocate for the sector as a force for good, and also appears to have an insatiable appetite to dominate the measurement of educational quality in English higher education.

    Any regulator that had the capacity and expertise to do either of these things would simply not be making such a suggestion at any time but particularly not when the sector faces such a difficult financial outlook.

    An OfS out of touch with its impact on the sector. Haven’t we been here before?

  • Is peer review of teaching stuck in the past?

    Most higher education institutions awarded gold for the student experience element of their 2023 Teaching Excellence Framework (TEF) submissions mentioned peer review of teaching (PRT).

    But a closer look at what they said will leave the reader with the strong impression that peer review schemes consume lots of time and effort for no discernible impact on teaching quality and student experience.

    What TEF showed us

    Forty out of sixty providers awarded gold for student experience mentioned PRT, and almost all of these (37) called it “observation.” This alone should give pause for thought: the first calls to move beyond observation towards a comprehensive process of peer review appeared in 2005 and received fresh impetus during the pandemic (see Mark Campbell’s timely Wonkhe comment from March 2021). But the TEF evidence is clear: the term and the concept not only persist, but appear to flourish.

    It gets worse: only six institutions (that’s barely one in ten of the sector’s strongest submissions) said they measure engagement with PRT or its impact, and four of those six are further education (FE) colleges providing degree-level qualifications. Three submissions (one is FE) showed evidence of using PRT to address ongoing challenges (take a bow, Hartpury and Plymouth Marjon universities), and only five institutions (two are FE) showed any kind of working relationship between PRT and their quality assurance processes.

    Scholarship shows that thoughtfully implemented peer review of teaching can benefit both the reviewer and the reviewed but that it needs regular evaluation and must adapt to changing contexts to stay relevant. Sadly, only eleven TEF submissions reported that their respective PRT schemes have adapted to changing contexts via steps such as incorporating the student voice (London Met), developing new criteria based on emerging best practice in areas such as inclusion (Hartpury again), or a wholesale revision of their scheme (St Mary’s Twickenham).

    The conclusion must be that providers spend a great deal of time and effort (and therefore money) on PRT without being able to explain why they do it, show what value they get from it, or even ponder its ongoing relevance. And when we consider that many universities have PRT schemes but didn’t mention them, the scale of expenditure on this activity will be larger than the TEF suggests, and the situation may be even worse than we think.

    Why does this matter?

    This isn’t just about getting a better return on time and effort; it’s about why providers do peer review of teaching at all, because no-one is actually required to do it. The OfS conditions of registration require higher education institutions to “provide evidence that all staff have opportunities to engage in reflection and evaluation of their learning, teaching, and assessment practice”.

    Different activities can meet the OfS stipulation: team teaching, formal observations for AdvanceHE Fellowship, teaching network discussions, and microteaching within professional development settings. Though not always formally categorised in institutional documentation, these nevertheless form part of the ecosystem in which people seek or engage with review from peers, and they represent forms of peer-review-adjacent practice which many TEF submissions discussed at greater length and with more confidence than PRT itself.

    So higher education institutions invest time and effort in PRT but fail to explain their reasoning or demonstrate its benefits, and appear to derive greater value from alternative activities that satisfy the OfS. Yet PRT persists. Why?

    What brought us to this point?

    Many providers will find that their PRT schemes were started, or incorporated into their institutional policies, around the millennium. Research from Crutchley and colleagues identified Brenda Smith’s HEFCE-funded project at Nottingham Trent in the late 1990s as a pioneering moment in establishing PRT as part of the UK landscape, following earlier developments in Australia and the US. Research into PRT gathered pace in the early 2000s, reached a (modest) peak around 2005, and then tailed off.

    PRT is the Bovril of the education cupboard. We’re pretty sure it does some good, though no one is quite sure how, and we don’t have time to look it up. We use it maybe once a year and are comforted by its presence, even though its best before date predates the first smartphones, and its nutritional value is now less than the label that bears its name. The prospect of throwing it out induces an existential angst – “am I a proper cook without it?” – and yes of course we’d like to try new recipes but who has the time to do that?

    Australia shows what is possible

    There is much to be learnt from looking beyond our own borders at how peer review has evolved in other countries. In Australia, the 2024 Universities Accord offered 47 recommendations as part of a federally funded vision for tertiary education reform to 2050. The Accord was reviewed on Wonkhe in March 2024.

    One of its recommendations advocates for the “increased, systematised use of peer review of teaching” to improve teaching quality, insisting this “should be underpinned by evidence of effective and efficient methodologies which focus on providing actionable feedback to teaching staff.” The Accord even suggested these processes could be used to validate existing national student satisfaction surveys.

    Some higher education institutions, such as The University of Sydney, had already anticipated this direction, revising their peer review processes with sector developments firmly in mind a few years ahead of the Accord’s formal recommendations. A Teaching@Sydney blog post from March 2023 describes how the process uses a pool of centrally trained and accredited expert reviewers, standardised documentation aligned with contemporary, evidence-based teaching principles, and cross-disciplinary matching that minimises conflicts of interest, all while integrating directly with continuing professional development pathways and fellowship programs. This creates a sustainable ecosystem of teaching enhancement rather than a set of isolated activities, meaning the Bovril is always in use rather than mouldering behind Christmas’s leftover jar of cranberry sauce.

    Lessons for the UK

    Comparing Australia and the UK draws out two important points. First, Australia has taken the simple but important step of saying PRT has a role in realising an ambitious vision for HE. This has not happened in the UK. In 2017 an AdvanceHE report said that “the introduction and focus of the Teaching Excellence Framework may see a renewed focus on PRT”, but clearly this has not come to pass.

    In fact, the opposite is true: the majority of TEF Summary Statements were silent on the matter of PRT, and there seemed to be some inconsistency in judgments in those instances where the reviewers did say something. In the absence of any explanation, it is hard to understand why they might commend the University of York’s use of peer observation on a PG Cert for new staff, yet judge the University of West London’s achievement of its self-imposed target – 100 per cent completion of teaching observations every two years for all permanent academic staff – to be “insufficient evidence of high-quality practice.”

    Australia’s example sounds rather top-down, but it’s sobering to realise that, if the TEF submissions are anything to go by, they are probably achieving more impact with less time and effort than their UK colleagues.

    Second, Australia is clear-sighted about how PRT needs to be implemented to work effectively, and about how it can be joined up with measures such as student satisfaction surveys that have emerged since PRT first appeared over thirty years ago. Higher education institutions such as Sydney have been making deliberate choices about how to do PRT and how to integrate it with other management, development and recognition processes – an approach that has informed and been validated by the Accord’s subsequent recommendations.

    Where now for PRT?

    UK providers can follow Sydney’s example by integrating their PRT schemes with existing professional development pathways and criteria, and a few have already taken that step. The FE sector affords many examples of combining different peer review methods, such as learning walks and coaching. University College London’s recent light refresh of its PRT scheme shows that management and staff alike welcome choice.

    A greater ambition than introducing variety would be to improve the reporting of programme design and to develop validated tools for assessing outcomes. This would require significant work and sponsorship from a body such as AdvanceHE, but it would yield stronger evidence of PRT’s value for supporting teaching development and underpin meaningful evaluation of practice.

    This piece is based on collaborative work between University College London and the University of Sydney examining peer review of teaching processes across both institutions. It was contributed by Nick Grindle, Samantha Clarke, Jessica Frawley, and Eszter Kalman.
