Tag: TEF

  • WEEKEND READING: Three reasons why the TEF will collapse under the weight of OfS and DfE expectations

    This blog was kindly authored by Paul Ashwin, Professor of Higher Education, Lancaster University.

    The Office for Students (OfS) and the Department for Education (DfE) have big plans to make the TEF much more consequential. They want future TEF outcomes to determine whether institutions can increase their intake of students and raise their undergraduate tuition fees in line with inflation, which could mean the difference between survival and merger or closure for many institutions. These plans require the OfS to show that the TEF provides a credible measure of institutional educational quality, whilst also fulfilling the OfS’s central remit of acting in the interest of students. The OfS consultation on the future approach to quality regulation provides an opportunity to assess the OfS’s latest attempt at such a justification. To say it looks weak is a huge understatement. Rather, unless there is a radical rethink, these proposals will lead to the collapse of the TEF.

    There are three reasons why this collapse would be inevitable.

    First, the TEF provides a broad, if flawed, measure of institutional educational quality. This was fine when the main consequence of a TEF award was the presence or absence of a marketing opportunity for institutions. However, if the TEF has existential consequences for institutions, then a whole series of limitations are suddenly cast in a deeply unflattering spotlight. The most obvious of these is that the TEF uses programme-level metrics to make judgements about institutional quality. It is both conceptual and methodological nonsense to attempt to scale up judgements of quality from the programme to the institutional level in this way, as has been routinely stated in every serious review of the National Student Survey. This didn’t matter too much when the TEF lacked teeth, but if it has profound consequences, then why wouldn’t institutions consider legal challenges to this obvious misuse of metrics? This situation is only exacerbated by the OfS’s desire to extend the TEF to all institutions regardless of size. The starkest consequence of this foolhardy venture is that a small provider with insufficient student experience and outcomes data could end up being awarded TEF Gold (and the ability to increase student recruitment and tuition fees in line with inflation) on the basis of a positive student focus group and an institutional statement. How might larger institutions awarded a Bronze TEF react to such obvious unfairness? That the OfS has put itself in this position shows how little it understands the consequences of what it is proposing.

    Second, in relation to the OfS acting in the student interest, things look even worse. As the TEF attempts to judge quality at an institutional level, it does not give any indication of the quality of the particular programme a student will directly experience. As the quality of degree programmes varies across all institutions, students on, for example, a very high quality psychology degree in an institution with TEF Bronze would pay lower tuition fees than students on a demonstrably much lower quality psychology degree in an institution that is awarded TEF Gold. How can this possibly be in the student interest? Things get even worse when we consider the consequences of TEF awards being based on data that will be between four and ten years out of date by the time students graduate. For example, let’s imagine a student who was charged higher tuition fees based on a TEF Gold award, whose institution gets downgraded to a TEF Bronze in the next TEF. Given this lower award would be based on data from the time the student was actually studying at the institution, how, in the name of the student interest, would students not be eligible for a refund for the inflation-linked element of their tuition fee?

    Third, the more consequential the TEF becomes, the more pressure is put on it as a method of quality assessment. This would have predictable and damaging effects. If TEF panels know that being awarded TEF Bronze could present an existential threat to institutions, then they are likely to be incredibly reluctant to make such an award. It is not clear how the OfS could prevent this without inappropriately and illegitimately intervening in the work of the expert panels. Also, in the current state of financial crisis, institutional leaders are likely to feel forced to game the TEF. This would make the TEF even less of an effective measure of educational quality and much more of a measure of how effectively institutions can play the system. It is totally predictable that institutions with the greatest resources will be in by far the best position to finance the playing of such games.

    The OfS and DfE seem determined to push ahead with this madness, a madness which incidentally goes completely against the widely lauded recommendations of the TEF Independent Review. Their response to the kinds of issues discussed here appears to be to deny any responsibility by asking, “What’s the alternative?” But there are much more obvious options than using a broad-brush mechanism of institutional quality to determine whether an institution can recruit more students and raise its undergraduate tuition fees in line with inflation. For example, it would make more sense, and be more transparent to all stakeholders, if these decisions were based on ‘being in good standing’ with the regulator, judged against a public set of required standards. This would also allow the OfS to take much swifter action against problematic providers than a TEF-based assessment process. However things develop from here, one thing is certain: if the OfS and DfE cannot find a different way forward, then the TEF will soon collapse under the weight of expectations it cannot possibly meet.


  • From improvement to compliance – a significant shift in the purpose of the TEF

    The Teaching Excellence Framework has always had multiple aims.

    It was partly intended to rebalance institutional focus from research towards teaching and student experience. Jo Johnson, the minister who implemented it, saw it as a means of increasing undergraduate teaching resources in line with inflation.

    Dame Shirley Pearce prioritised enhancing quality in her excellent review of TEF implementation. And there have been other purposes of the TEF: a device to support regulatory interventions where quality fell below required thresholds, and as a resource for student choice.

    And none of this should ignore its enthusiastic adoption by student recruitment teams as a marketing tool.

    As former Chair and Deputy Chair of the TEF, we are perhaps more aware than most of these competing purposes, and more experienced in understanding how regulators, institutions and assessors have navigated the complexity of TEF implementation. The TEF has had its critics – something else we are keenly aware of – but it has had a marked impact.

    Its benchmarked indicator sets have driven a data-informed and strategic approach to institutional improvement. Its concern with disparities for underrepresented groups has raised the profile of equity in institutional education strategies. Its whole-institution sweep has made institutions alert to the consequences of poorly targeted education strategies and prioritised improvement goals.

    Now, the publication of the OfS’s consultation paper on the future of the TEF is an opportunity to reflect on how the TEF is changing and what it means for the regulatory and quality framework in England.

    A shift in purpose

    The consultation proposes that the TEF becomes part of what the OfS sees as a more integrated quality system. All registered providers will face TEF assessments, with no exemptions for small providers. Given the number of new providers seeking OfS registration, it is likely that the number to be assessed will be considerably larger than the 227 institutions in the 2023 TEF.

    Partly because of the larger number of assessments to be undertaken, TEF will move to a rolling cycle, with a pool of assessors. Institutions will still be awarded three grades – one for outcomes, one for experience and one overall – but the overall grade will simply be the lower of the other two. The real impact of this will be on Bronze-rated providers, who could find themselves subject to a range of measures, potentially including student number controls or fee constraints, until they show improvement.
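
    To make the proposed rule concrete, here is a minimal sketch in Python. The grade ordering follows the consultation; the function and labels are illustrative assumptions, not the OfS’s own specification.

    ```python
    # Minimal sketch of the proposed overall-rating rule: the overall
    # grade is simply the lower of the two aspect grades. The ordering
    # follows the consultation; names and types are illustrative.
    GRADE_ORDER = ["Requires improvement", "Bronze", "Silver", "Gold"]

    def overall_rating(student_experience: str, student_outcomes: str) -> str:
        """Return the lower of the two aspect grades."""
        return min(student_experience, student_outcomes, key=GRADE_ORDER.index)

    # A Silver for experience paired with a Bronze for outcomes
    # would be Bronze overall under the proposals.
    print(overall_rating("Silver", "Bronze"))  # -> Bronze
    ```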

    The OfS consultation paper marks a significant shift in the purpose of the TEF, from quality enhancement to regulation and from improvement to compliance. The most significant changes are at the lower end of assessed performance. The consultation paper makes sensible changes to aspects of the TEF which always posed challenges for assessors and regulators, tidying up the relationship between the threshold B3 standards and the lowest TEF grades. It correctly separates measures of institutional performance on continuation and completion – over which institutions have more direct influence – from progression to employment – over which institutions have less influence.

    Pressure points

    But it does this at a heavy cost. By treating the Bronze grade as a measure of performance at, rather than above, threshold quality, it will produce just two grades above the threshold. In shifting the focus towards quantitative indicators and away from institutional discussion of context, it will make TEF life more difficult for further education institutions and for institutions in locations with challenging graduate labour markets. The replacement of the student submission with student focus groups may allow more depth on some issues, but comes at the expense of breadth, and the student voice is, disappointingly, weakened.

    There are further losses as the regulatory purpose is embedded. The most significant is the move away from educational gain: following TEF 2023, almost all institutions were developing their approaches to, and evaluation of, educational gain, and we have seen many examples where this was shaping fruitful ways of articulating institutional goals and how they inform educational provision.

    Educational gain is an area in which institutions were increasingly thinking about distinctiveness and how it informs the student experience. It is a real loss to see it go, and it will weaken the power of many education strategies. The ideas of educational gain and distinctiveness will almost certainly be required for confident performance at the highest levels of achievement, but it is a real pity that they are now less explicit. Educational gain can drive distinctiveness, and distinctiveness can drive quality.

    Two sorts of institutions will face the most significant challenges. The first, obviously, are providers rated Bronze in 2023, or Silver-rated providers whose indicators are on a downward trajectory. Eleven universities were given a Bronze rating overall in the last TEF exercise – and 21 received Bronze either for the student experience or student outcomes aspects. Of the 21, only three Bronzes were for student outcomes, but under the OfS plans, all would be graded Bronze, since any institution would be given its lowest aspect grade as its overall grade. Under the proposals, Bronze-graded institutions will need to address concerns rapidly to mitigate impacts on growth plans, funding, prestige and competitive position.

    The second group facing significant challenges will be those in difficult local and regional labour markets. Of the 18 institutions with Bronze in one of the two aspects of TEF 2023, only three were graded Bronze for student outcomes, whereas 15 were for student experience. Arguably this was to be expected when only two of the six features of student outcomes had associated indicators: continuation/completion and progression.

    In other words, if indicators were substantially below benchmark, there were opportunities to show how outcomes were supported and educational gain was developed. Under the new proposals, the approach to assessing student outcomes is largely, if not exclusively, indicator-based for continuation and completion. The approach is likely to reinforce differences between institutions, especially those with intakes from underrepresented populations.

    The stakes

    The new TEF will play out in different ways in different parts of the sector. The regulatory focus will increase pressure on some institutions, whilst appearing to relieve it in others. For those institutions operating at 2023 Bronze levels or where 2023 Silver performance is declining, the negative consequences of a poor performance in the new TEF, which may include student number controls, will loom large in institutional strategy. The stakes are now higher for these institutions.

    On the other hand, institutions whose graduate employment and earnings outcomes are strong are likely to feel relieved, though careful reading of the grade specifications for higher performance suggests that there is work to be done on education strategies even in the best-performing 2023 institutions.

    In public policy, lifting the floor – by addressing regulatory compliance – and raising the ceiling – by promoting improvement – at the same time is always difficult, but the OfS consultation seems to have landed decisively on the side of compliance rather than improvement.


  • An assessor’s perspective on the Office for Students’ TEF shake-up

    Across the higher education sector in England some have been waiting with bated breath for details of the proposed new Teaching Excellence Framework. Even amidst the multilayered preparations for a new academic year – the planning to induct new students, to teach well and assess effectively, to create a welcoming environment for all – those responsible for education quality have had one eye firmly on the new TEF.

    The OfS has now published its proposals along with an invitation to the whole sector to provide feedback on them by 11 December 2025. As an external adviser for some very different types of provider, I’m already hearing a kaleidoscope of changing questions from colleagues. When will our institution or organisation next be assessed if the new TEF is to run on a rolling programme rather than in the same year for everyone? How will the approach to assessing us change now that basic quality requirements are included alongside the assessment of educational ‘excellence’? What should we be doing right now to prepare?

    Smaller providers, including further education colleges that offer some higher education programmes, have not previously been required to participate in the TEF assessment. They will now all need to take part, so have a still wider range of questions about the whole process. How onerous will it be? How will data about our educational provision, both quantitative and qualitative, be gathered and assessed? What form will our written submission to the OfS need to take? How will judgements be made?

    As a member of TEF assessment panels through TEF’s entire lifecycle to date, I’ve read the proposals with great interest. From an assessor’s point of view, I’ve pondered on how the assessment process will change. Will the new shape of TEF complicate or help streamline the assessment process so that ratings can be fairly awarded for providers of every mission, shape and size?

    Panel focus

    TEF panels have always comprised experts from the whole sector, including academics, professional staff and student representatives. We have looked at the evidence of “teaching excellence” (I think of it as good education) from each provider very carefully. It makes sense that the two main areas of assessment, or “aspects” – student experience and student outcomes – will continue to be discrete areas of focus, leading to two separate ratings of either Gold, Silver, Bronze or Requires Improvement. That’s because the data for each of these can differ quite markedly within a single provider, so it can mislead students to conflate the two judgements.

    [Diagram from page 18 of the consultation document]

    Another positive continuity is the retention of both quantitative and qualitative evidence. Quantitative data include the detailed datasets provided by OfS, benchmarked against the sector. These are extremely helpful to assessors who can compare the experiences and outcomes of students from different demographics across the full range of providers.

    Qualitative data have previously come from 25-page written submissions from each provider, and from written student submissions. Changes are planned for both of these forms of evidence, but both will remain crucial.

    The written provider submissions may be shorter next time. Arguably there is a risk here, as submissions have always enabled assessors to contextualise the larger datasets. Each provider has its own story of setting out to make strategic improvements to their educational provision, and the submissions include both qualitative narrative and internally produced quantitative datasets related to the assessment criteria, or indicators.

    However, it’s reasonable for future submissions to be shorter as the student outcomes aspect will rely upon a more nuanced range of data relating to study outcomes as well as progression post-study (proposal 7). While it’s not yet clear what the full range of data will be, this approach is potentially helpful to assessors and to the sector, as students’ backgrounds, subject fields, locations and career plans vary greatly and these data take account of those differences.

    The greater focus on improved datasets suggests that there will be less reliance on additional information, previously provided at some length, on how students’ outcomes are being supported. The proof of the pudding for how well students continue with, complete and progress from their studies is in the eating – in the outcomes themselves rather than the recipes. Outcomes criteria should be clearer in the next TEF in this sense, and more easily applied with consistency.

    Another proposed change focuses on how evidence might be more helpfully elicited from students and their representatives (proposal 10). In the last TEF students were invited to submit written evidence, and some student submissions were extremely useful to assessors, focusing on the key criteria and giving a rounded picture of local improvements and areas for development. For understandable reasons, though, students of some providers did not, or could not, make a submission; the huge variations in provider size mean that in some contexts students do not have the capacity or opportunity to write up their collective experiences. This variation was challenging for assessors, and anything that can be done to level the playing field for students’ voices next time will be welcomed.

    Towards the data limits

    Perhaps the greatest challenge for TEF assessors in previous rounds arose when we were faced with a provider with very limited data. OfS’s proposal 9 sets out to address this by varying the assessment approach accordingly. Where there is no statistical confidence in a provider’s NSS data (or no NSS data at all), direct evidence of students’ experiences with that provider will be sought, and where there is insufficient statistical confidence in a provider’s student outcomes, no rating will be awarded for that aspect.
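
    Expressed as decision logic, my reading of proposal 9 looks something like the sketch below (the names and structure are mine, not the OfS’s):

    ```python
    # A sketch of how proposal 9 varies the assessment approach, as
    # described above. Function and variable names are illustrative.
    from typing import Optional

    def experience_evidence(nss_statistically_confident: bool) -> str:
        # No statistical confidence in NSS data (or no NSS data at all):
        # seek direct evidence of students' experiences instead.
        if nss_statistically_confident:
            return "benchmarked NSS indicators"
        return "direct evidence gathered from students"

    def outcomes_rating(outcomes_statistically_confident: bool,
                        assessed_rating: str) -> Optional[str]:
        # Insufficient statistical confidence in outcomes data:
        # no rating is awarded for that aspect.
        return assessed_rating if outcomes_statistically_confident else None
    ```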

    The proposed new approach to the outcomes rating makes great sense – it is so important to avoid reaching for a rating which is not supported by clear evidence. The plan to fill any NSS gap with more direct evidence from students is also logical, although it could run into practical challenges. It will be useful to see suggestions from the sector about how this might be achieved within differing local contexts.

    Finally, how might assessment panels be affected by changes to what we are assessing, and the criteria for awarding ratings? First, both aspects will incorporate the requirements of OfS’s B conditions – general ongoing conditions of registration. The student experience aspect will now be aligned with B1 (course content and delivery), B2 (resources, academic support and student engagement) and part of B4 (effective assessment). Similarly, the student outcomes B condition will be embedded into the outcomes aspect of the new TEF. This should make even clearer to assessors what is being assessed, where the baseline is and what sits above that line as excellent or outstanding.

    And this in turn should make agreeing upon ratings more straightforward. It was not always clear in the previous TEF round where the line should be drawn between Requires Improvement and failing to meet even the basic requirements for the sector. This applied only to the very small number of providers whose provision did not appear, to put it plainly, to be good enough.

    But more clarity in the next round about the connection between baseline requirements and TEF ratings should aid assessment processes. Clarification that in the future a Bronze award signifies “meeting the minimum quality requirements” is also welcome. Although the sector will need time to adjust to this change, it is in line with the risk-based approach OfS wants to take to the quality system overall.

    The £25,000 question

    Underlying all of the questions being asked by providers now is a fundamental one: how will we do next time?

    Looking at the proposals with my assessor’s hat on, I can’t predict what will happen for individual providers, but it does seem that the evolved approach to awarding ratings should be more transparent and more consistent. Providers need to continue to understand their own education-related data, both quantitative and qualitative, and commit to a whole-institution approach to embedding improvements, working in close partnership with students.

    Assessment panels will continue to take our roles very seriously, to engage fully with the agreed criteria, and to do everything we can to make a positive contribution to encouraging, recognising and rewarding teaching excellence in higher education.


  • Back to the future for the TEF? Back to school for OfS?

    As the new academic year dawns, there is a feeling of “back to the future” for the Teaching Excellence Framework (TEF).

    And it seems that the Office for Students (OfS) needs to go “back to school” in its understanding of the measurement of educational quality.

    Both of these feelings come from the OfS Chair’s suggestion that the level of undergraduate tuition fees institutions can charge may be linked to institutions’ TEF results.

    For those just joining us on TEF-Watch, this is where the TEF began back in the 2015 Green Paper.

    At that time, the idea of linking tuition fees to the TEF’s measure of quality was dropped pretty quickly because it was, and remains, totally unworkable in any fair and reasonable way.

    This is for a number of reasons that would be obvious to anyone who has a passing understanding of how the TEF measures educational quality, which I wrote about on Wonkhe at the time.

    Can’t work, won’t work

    First, the TEF does not measure the quality of individual degree programmes. It evaluates, in a fairly broad-brush way, a whole institution’s approach to teaching quality and related outcomes. All institutions have programmes of variable quality.

    This means that linking tuition fees to TEF outcomes could lead to significant numbers of students on lower quality programmes being charged the higher rate of tuition fees.

    Second, and even more unjustly, the TEF does not give any indication of the quality of education that students will directly experience.

    Rather, when they are applying for their degree programme, it provides a measure of an institution’s general teaching quality at the time of its last TEF assessment.

    Under the plans currently being considered for a rolling TEF, this could be up to five years previously – which would mean it gives a view of educational quality from at least nine years before applicants will graduate. Even if it were from the year before they enrol, it would be based on an assessment of evidence that took place at least four years before they complete their degree programme.

    Those knowledgeable about educational quality understand that, over such a time span, educational quality could have changed dramatically. Given this, on what basis can it be fair for new students to be charged the higher rate of tuition fees as a result of the general quality of education enjoyed by their predecessors?

    These two reasons would make a system in which tuition fees were linked to TEF outcomes incredibly unfair. And that is before we even consider its impact on the TEF as a valid measure of educational quality.

    The games universities play

    The higher the stakes in the TEF, the more institutions will feel forced to game the system. In the current state of financial crisis, any institutional leader is likely to feel almost compelled to pull every trick in the book in order to ensure the highest possible tuition fee income for their institution.

    How could they not, given that it could make the difference between survival, a forced merger or closure for their institution? This would make the TEF even less of an effective measure of educational quality and much more of a measure of how effectively institutions can play the system.

    It takes very little understanding of such processes to see that institutions with the greatest resources will be in by far the best position to finance the playing of such games. Making the stakes so high for institutions would also remove any incentive for them to use the TEF as an opportunity to openly identify educational excellence and meaningfully reflect on their educational quality.

    This would mean that the TEF loses any potential to meet its core purpose, identified by the Independent Review of the TEF, “to identify excellence and encourage enhancement”. It will instead become even more of a highly pressurised marketing exercise with the TEF outcomes having potentially profound consequences for the future survival of some institutions.

    In its own terms, the suggestion about linking undergraduate tuition fees to TEF outcomes is nothing to worry about. It simply won’t happen. What is a much greater concern is that the OfS is publicly making this suggestion at a time when it is claiming it will work harder to advocate for the sector as a force for good, and also appears to have an insatiable appetite to dominate the measurement of educational quality in English higher education.

    Any regulator that had the capacity and expertise to do either of these things would simply not be making such a suggestion at any time but particularly not when the sector faces such a difficult financial outlook.

    An OfS out of touch with its impact on the sector. Haven’t we been here before?


  • The Office for Students reviews TEF… again

    The Office for Students has been evaluating the last iteration of the Teaching Excellence Framework (TEF), which happened in 2023.

    The 2023 TEF was a very different beast to previous iterations, focusing more on qualitative evidence (submissions from providers and students) and less on the quantitative experience and output measures. But to be clear, this work does not appear to assess the impact or likely effects of these changes – it treats the 2023 exercise very much as a one-off event.

    We get an independent evaluation report, written by IFF Research. There are the findings of a survey of students involved in preparing the student submissions (aspects of which contribute to a student guide to evidence collection for TEF), findings from a survey of applicants (conducted with Savanta), and an analysis of the estimated costs to the sector of TEF 2023. The whole package is wrapped up with a summary blog post from OfS TEF supremo Graeme Rosenberg.

    Of all this, the blog post is the only bit that touches on what most of us probably care about – the future of the TEF, and the wider idea of the “integrated quality system”. Perhaps predictably, OfS has heard that it should

    “build on the elements of the TEF that worked well and improve on areas that worked less well for some providers.”

    The top-line summary of everything else is that OfS is pleased that TEF seems to be driving change in institutions, particularly where it is driven by student perspectives. There’s less confidence that the TEF outcomes are useful for prospective students – the regulator wants to explore this as a part of a wider review of information provision. And while institutions do find TEF valuable, the cost involved in participation is considerable.

    How much does TEF cost then?

    It cost OfS £3.4m, and the mean estimate for costs to the wider sector was £9.96m. That’s about £13.4m in total but with fairly hefty error bars.

    What else could the taxpayer buy for £13.4m? There’s the much-needed Aylesbury link road, an innovation hub in Samlesbury near the new National Cyber Force headquarters (promising jobs paying upwards of £3,000 according to the headline), or enough money to keep Middlesbrough Council solvent for a while. In the higher education world, it’s equivalent to a little under 1,450 undergraduate annual tuition fees.
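
    As a rough check on that last equivalence (assuming, as seems likely, the £9,250 annual home undergraduate fee cap in force at the time), the arithmetic holds up:

    \[
    \pounds 3.4\text{m} + \pounds 9.96\text{m} \approx \pounds 13.4\text{m},
    \qquad
    \frac{13{,}400{,}000}{9{,}250} \approx 1{,}449
    \]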

    The sector numbers come from a survey of 32.3 per cent of the providers involved in the 2023 TEF (73 in all: 52 higher education providers and 21 FE colleges), conducted in September and October 2024 (so significantly after the event). It looked at both staff costs and non-staff costs (stuff like consultancy fees).

    As you’d probably expect, costs and time commitments vary widely by institution – one provider spent 30 staff days on the exercise, while for another it was 410 (the median? 91.6). Likewise, there was variation in the seniority of staff involved – one institution saw senior leaders spend a frankly astonishing 120 days on the TEF. Your median higher education provider spent an estimated £37,400 on the exercise (again, huge error bars here). It is asserted that Gold-rated providers spent slightly more than Silver-rated providers – the data is indicative at best, and OfS is careful not to assert causality.

    We also get information on the representations process – the mechanism by which providers could appeal their TEF rating. The sample size here is necessarily tiny: 11 higher education providers, 8 colleges – we are given a median of £1,400 for colleges and £4,400 for higher education providers.

    Was it worth it?

    The picture painted by the independent IFF evaluation is positive about the TEF’s role in driving “continuous improvement and excellence” at providers. The feeling was that it had encouraged a greater use of data and evidence in decision making – but in some cases these positive impacts were negligible given the volume of the input required. Students were also broadly positive, citing limited but positive impacts.

    The evaluation also made it clear that the TEF was burdensome – a large drain on available staff or student resource. However, it was generally felt that the TEF was “worth” the burden – and there was a broad satisfaction about the guidance and support offered by OfS during the process (although as you might expect, people generally wanted more examples of “good” submissions – and the “woolly” language around learning gain was difficult to deal with, even though the purpose was to drive autonomous reflection on measures that made sense in a provider context).

    One of the big 2023 cycle innovations was a larger role for the student submission – seen as a way to centre the student perspective within TEF assessment. This wasn’t as successful as OfS may have hoped: responses were split as to whether the process had “empowered the student voice” or not, and the bigger institutions tended to see it as replicating pre-existing provider-level work.

    Students themselves (not many of them – there were 20 interviews with students involved in preparing the submissions) saw this empowerment as being limited – greater student involvement in quality systems was good, but largely the kind of thing that a good provider should be doing anyway.

    But the big question, given the overall purpose, really needs to be whether TEF 2023 improved the student experience and student outcomes. And the perspective on this was… mixed. Commonly TEF complemented other ongoing work in this area, making it difficult to pick out improvements that were directly linked to TEF, or even to this particular TEF. Causality – it’s difficult.

    If we are going to have a big, expensive exercise like TEF it is important to point to tangible benefits from it. Again, the evidence isn’t quite there. About half of the providers surveyed used TEF (as a process or as a set of outputs including the “medals” and the feedback) to inform decision making and planning – but few examples of decisions predicated on TEF were offered. And most student representatives were unable to offer evidence of any change as a result of TEF.

    Finally, I was gratified to note that coverage in “sector publications like Wonkhe” was one key way of sharing good practice around TEF submissions.

    The value to applicants

    Any attempt within the sector to provide a better experience, or better outcomes, for students is surely to be welcomed. However, for a large and spendy intervention the evidence for a direct contribution is limited. This is perhaps not surprising – there have been numerous attempts to improve student experience and outcomes even since the birth of the OfS: by the regulator itself, by other sector bodies with an interest in the student experience (the Quality Assurance Agency, Advance HE, the sector representative bodies and so forth) and autonomously by institutions or parts of institutions.

    Somewhat curiously, the main evaluation document has little to say about the realisation of TEF’s other main proposed benefit – supporting applicants in choosing a provider to study at. Providers themselves are unsure of the value of TEF here (feeling that it was unlikely that applicants would understand TEF or be able to place due weight on its findings), though there is some suggestion that a “halo effect”, drawing in part from the liberal use of logos and that job lot of gold paint, could help present a positive image of the provider. It is a hell of a reach, but some noted that the use of TEF and its logos in institutional marketing and recruitment is at least evidence that someone, somewhere, thinks it might work.

    The thing to do here would be to ask applicants – which OfS commissioned Savanta to do on its behalf as a separate exercise. This research was based on six focus groups covering 35 prospective students aged between 17 and 20 and applying to English providers. In four of these groups, participants had heard of the TEF – in two they had not – and in every case the applicants had ended up applying to Silver-rated universities.

    This is backed up by what initially looks like a decent survey instrument – a big survey (2,599 respondents, drawn from various existing online panels, weighted via quotas on age, gender and ethnicity, and post-fieldwork by provider type, mode of study, domicile and a neighbourhood participation marker) conducted in April and May of 2024. The headline finding here is that 41.7 per cent of applicants (n=798) had seen TEF ratings for any university they had looked at.

    Somewhat mystifyingly, the survey then focuses entirely on the experience of those 333 applicants – the 41.7 per cent – in using the TEF information, before asking the whole sample whether they thought TEF ratings would be important in applying to university (52.2 per cent reckoned they would be, despite a fair number not having even noticed the ratings).

    Can I just stop here and say this is a weird methodology? I was expecting a traditional high-n survey of applicants, asked to rate the importance of various factors on application choices, ideally with no prompting. This would give a clearer picture of the current value of TEF for such decisions, which is what you would expect in an evaluation. That’s not to say that the focus groups or a specific awareness-or-use survey wouldn’t be a valid contribution to a proper mixed-methods analysis – or a means of generating a survey instrument for wider use.

    Even so, participants in the focus groups were happy to list the factors that affected their choices – these included the obvious winners like location, course content, and graduate outcomes, plus a “significant role” for the cost of living. Secondary (less important) factors included university reputation, teaching quality, and other personal preferences. Though some of these factors are covered within the TEF exercise, not one single applicant mentioned TEF results as a primary or secondary factor.

    For those that had heard of TEF it was seen as a “confirmatory tool rather than a decisive factor.” Applicants did not understand how TEF ratings were determined, the criteria used, or what – say – gold rather than silver meant when comparing providers.

    The focus groups chucked the supplementary information (panel statements, submissions, the data dashboard) at applicants – they tended to quite like the student statements (viewing these as authentic), but saw the whole lot as lengthy, overcomplicated, and lacking in specificity.

    I enjoyed this comment on the TEF data dashboards:

    I feel like there is definitely some very useful information on this page, but it’s quite hard to figure out what any of it means.

    On the main ratings themselves, participants were clear that gold or silver probably pointed to a “high standard of education,” but the sheer breadth of the assessments and the lack of course level judgements made the awards less useful.

    There was, in other words, a demand for course-specific information. Not only did applicants not mention Discover Uni (a government-funded service that purports to provide course-level data on student outcomes and the student experience), but the report as a whole did not mention that it even existed. Oh dear.

    Unlike IFF, Savanta made some recommendations. There needs to be better promotion of the TEF to applicants, clearer ratings and rationales, and a more concise and direct presentation of additional information. Which is nice.

    What to make of it all

    Jim will be looking at the student submission aspects in more detail over on the SUs site, but even this first reading of the evaluation documents does not offer many hints on the future of the TEF. In many ways it is what you would expect: TEF has changed mainly when OfS decided it should, or when (as with the Pearce review) the hand of the regulator was forced.

    While providers are clearly making the best of TEF as a way to keep the focus on the student experience (as, to be clear, one stimulus among many), it is still difficult to see a way in which the TEF we have does anything to realise the benefits proposed way back in the 2015 Conservative manifesto – to “recognise universities offering the highest teaching quality” and to allow “potential students to make decisions informed by the career paths of past graduates.”


  • To stick or pivot? TEF 3.0 and the future of quality

    • Stephanie Marshall is Vice-Principal (Education) at Queen Mary University of London. She is the author of the forthcoming Strategic Leadership of Change in Higher Education (3rd edition). Ben Hunt is Executive Officer (Education) at Queen Mary University of London.

    In contrast to the adage that ‘good strategy closes doors’, the Office for Students (OfS) Strategy consultation has left many options open. This is true of the Teaching Excellence Framework (TEF), which the OfS intends to bring into alignment with its wider quality regime:            

    ‘TEF will be the core of our new integrated approach to quality, with assessment activity becoming more routine and more widespread to ensure that institutions are delivering high quality academic experiences and positive outcomes’.

    Cart before the horse?

    The OfS has stated in its consultation that it will expand its quality assessment regime without evaluating how this exercise has enhanced, or will enhance, education provision.

    Previous investigations were seen as burdensome and lacking transparency.[1] On transparency, Professor Amanda Broderick, Vice-Chancellor & President at the University of East London, reflected on a quality investigation: ‘…we were not informed of what the OfS’s concerns had been at any point of the review’.

    On burden, Professor David Phoenix, Vice-Chancellor of London South Bank University, has written about an investigation at his provider: ‘…providers are already very used to…scrutiny. Professional and regulatory bodies (PSRBs) have their own approaches to course review and validation, and in many instances the level of scrutiny can greatly exceed that of the OfS’.

    And in a recent HEPI blog, the ex-higher education minister and architect of TEF Lord Jo Johnson asserts that the OfS has consistently deprioritised innovation.

    So perhaps the OfS has reached a moment of choice: to stick or pivot.

    Stephanie Marshall has written previously about the different global ‘pivots’ in higher education quality: ‘massification, quality assurance, quality enhancement, and then a move to addressing equity deploying large data’.

    The OfS’s decision to pause new provider entrants has arguably stalled massification. Its assurance work duplicates that of other regulators such as Ofsted. And its deployment of data through the Data Futures process is beset by delays. Instead of enabling providers to embrace change, an unintended consequence of these decisions is that sector innovation is slowed. Amidst this and the sector’s financial challenges, the OfS seeks to expand its investigatory regime without a clear theory of change linked to enhancement.

    Pivot Part 1: From assurance to fremragende

    In a Norwegian report to which Marshall contributed, it was noted that: ‘In English, the term ‘excellence’ is now much overused…In Norwegian the word “fremragende” has a sense of moving forward (frem) and upward (tall or reaching above the rest, ragende) and is reserved to describe something really cutting-edge’.[2] 

    Centres for disciplinary excellence in education were established in Norway through the Centres for Excellence (CfE) Initiative, introduced by their previous Quality Assurance body, NOKUT. To be eligible for CfE status and funding, higher education institutions had to meet baseline standards and evaluate the distinctive quality of their provision. Each Centre selected its own criteria aligned to the provider’s vision and mission.

    Of course, there were challenges with this process, particularly when the judgements of the assessing panel differed from those of the institution being assessed. However, NOKUT was open to evolving its views, positioning itself as a ‘critical friend’. The process set out to be supportive and iterative, focused on both past impact and continuous improvement. The success of this approach has been validated over the years by regular evaluations of the impact of the scheme.

    In England, 227 providers participated in TEF. Adopting wholesale a system from a country with 21 higher education providers is clearly not practical. The important lessons are, firstly, that a critical friend approach can be beneficial to enhancement and, secondly, that institutions can be trusted to evolve some of their quality metrics in line with their mission and values. This is particularly important in a system as diverse as England’s, where most providers are already above the quality baseline.

    Fremragende may be a more accurate framing of authentic educational enhancement than the English buzzword ‘excellence’. Fremragende suggests an ongoing journey: a verb rather than a noun. The higher education environment is and will be in a state of flux, where quality frameworks need to be agile and unlock innovation, particularly in the territory of AI.

    Pivot Part 2: Enabling enhancement through data

    The OfS has a basket of lagging indicators – the National Student Survey (NSS) and the Graduate Outcomes Survey (GOS) – which feed the TEF. If they are utilised in the next TEF, which seems likely, one way to begin to move from assurance to continuous improvement could be for the OfS to encourage greater use of the optional NSS question bank. There are already additional questions covering the views of healthcare students, as well as several other optional questions. An integrated approach could also be taken to the questions within the GOS, either enabling some optional questions for graduates, or mapping the GOS questions to those in the NSS.

    This flexibility would demonstrate trust, give providers a way to articulate ‘learning gain’, and capture the diversity in the sector. It would also maintain many of the positive aspects of TEF for key stakeholders, including the centrality of the student voice through the NSS and other mechanisms.        

    Pivot Part 3: Quality through partnership

    Any approach to integration should be a partnership with students, providers, international organisations and employers. We hope that entrance into the International Network for Quality Assurance Agencies in Higher Education will enable the OfS to collaborate with other global quality bodies.

    The OfS should consider how, in its assessment of excellence, it integrates learning from other inspection regimes, such as Ofsted and existing PSRB requirements. Through this, it should reduce regulatory duplication. This is in line with the Regulator’s Code principle of ‘collect once, use many times’.

    A mindset shift from assessing the baseline to forward-facing, continuous enhancement is required, both by the OfS and the sector. With further contextualisation of provision, the sector can exercise its autonomy to drive excellence, and the OfS can fulfil its statutory role in enabling quality and innovation. 

    Let’s join our Norwegian colleagues in adopting the fremragende approach.

