Category: Quality

  • Reclaiming the narrative of educational excellence despite the decline of educational gain

    There was a time when enhancement was the sector’s watchword.

    Under the Higher Education Funding Council for England (HEFCE), concepts like educational gain captured the idea that universities should focus not only on assuring quality, but on improving it. Teaching enhancement funds, learning and teaching strategies, and collaborative initiatives flourished. Today, that language has all but disappeared. The conversation has shifted from enhancement to assurance, from curiosity to compliance. Educational gain has quietly declined, not as an idea, but as a priority.

    Educational gain was never a perfect concept. Like its cousin learning gain, it struggled to be measured in ways that were meaningful across disciplines, institutions, and student journeys. Yet its value lay less in what it measured than in what it symbolised. It represented a shared belief that higher education is about transformation: the development of knowledge, capability, and identity through the act of learning. It reminded us that the student experience was not reducible to outcomes, but highly personal, developmental, and distinctive.

    Shifting sands

    The shift from HEFCE to the Office for Students (OfS) marked more than a change of regulator; it signalled a change in the state’s philosophy, from partnership to performance management. The emphasis moved from enhancement to accountability. Where HEFCE invested in collaborative improvement, OfS measures and monitors. Where enhancement assumed trust in the professional judgement of universities and their staff, regulation presumes the need for assurance through metrics. This has shaped the sector’s language: risk, compliance, outcomes, baselines – all necessary, perhaps, but narrowing.

    The latest OfS proposals on revising the Teaching Excellence Framework mark a shift in their treatment of “educational gain.” Rather than developing new measures or asking institutions to present their own evidence of gain, OfS now proposes removing this element entirely, on the grounds that it produced inconsistent and non-comparable evidence. This change is significant: it signals a tighter focus on standardised outcomes indicators. Yet by narrowing the frame in this way, we risk losing sight of the broader educational gains that matter most to students, gains that are diverse, contextual, and resistant to capture through a uniform set of metrics. It speaks to a familiar truth: “not everything that counts can be counted, and not everything that can be counted counts”.

    And this narrowing has consequences. When national frameworks reduce quality to a narrow set of indicators, they risk erasing the very distinctiveness that defines higher education. Within a framework of uniform metrics, where does the space remain for difference, for innovation, for the unique forms of learning that make higher education a rich and diverse ecosystem? If we are all accountable to the same measures, it becomes even more important that we define for ourselves what excellence in education looks like, within disciplines, within institutions, and within the communities we serve.

    Engine room

    This is where the idea of enhancement again becomes critical. Enhancement is the engine of educational innovation: it drives new methods, new thinking, and the continuous improvement of the student experience. Without enhancement, innovation risks becoming ornamental: flashes of good practice without sustained institutional learning. The loss of “educational gain” as a guiding idea has coincided with a hollowing out of that enhancement mindset. We have become good at reporting quality, but less confident in building it.

    Reclaiming the narrative of excellence is, therefore, not simply about recognition and reward; it is about re-establishing the connection between excellence and enhancement. Excellence is what we value, enhancement is how we realise it. The Universitas 21 project Redefining Teaching Excellence in Research-Intensive Universities speaks directly to this need. It asks: if we are to value teaching as we do research, how do we define excellence on our own terms? What does excellence look like in an environment where metrics are shared but missions are not?

    For research-intensive universities in particular, this question matters. These institutions are often defined by their research outputs and global rankings, yet they also possess distinctive educational strengths: disciplinary depth, scholarly teaching, and research-informed curricula. Redefining teaching excellence means articulating those strengths clearly, and ensuring they are recognised, rewarded, and shared. It also means returning to the principle of enhancement: a commitment to continual improvement, collegial learning, and innovation grounded in scholarship.

    Compass point

    The challenge, and opportunity, for the sector is to rebuild the infrastructure that once supported enhancement. HEFCE-era initiatives, from the Subject Centres to the Higher Education Academy, created national and disciplinary communities of practice. They gave legitimacy to innovation and space for experimentation. The dismantling of that infrastructure has left many educators working in isolation, without the shared structures that once turned good teaching into collective progress. Reclaiming enhancement will require new forms of collaboration, cross-institutional, international, and interdisciplinary, that enable staff to learn from one another and build capacity for educational change.

    If educational gain as a metric was flawed, educational gain as an ambition is not. It reminds us that the purpose of higher education is not only to produce measurable outcomes but to foster human and intellectual development. It is about what students become, not just what they achieve. As generative AI reshapes how students learn and how knowledge itself is constructed, this broader conception of gain becomes more vital than ever. In this new context, enhancement is about helping students, and staff, to adapt, to grow, and to keep learning.

    So perhaps it is time to bring back “educational gain,” not as a measure, but as a mindset; a reminder that excellence in education cannot be mandated through policy or reduced to data. It must be defined and driven by universities themselves, through thoughtful design, collaborative enhancement, and continual renewal.

    Excellence is the destination, but enhancement is the journey. If we are serious about defining one, we must rediscover the other.

  • The Office for Students steps on to shaky ground in an attempt to regulate academic standards

    The funny thing about the story about today’s intervention by the Office for Students is that it is not really about grade inflation, or degree algorithms.

    I mean, it is on one level: we get three investigation reports on providers related to registration condition B4, and an accompanying “lessons learned” report that focuses on degree algorithms.

    But the central question is about academic standards – how they are upheld, and what role an arm of the government has in upholding them.

    And it is about whether OfS has the ability to state that three providers are at “increased risk” of breaching a condition of registration on the scant evidence of grade inflation presented.

    And it is certainly about whether OfS is actually able to dictate (or even strongly hint at its revealed preferences on) the way degrees are awarded at individual providers, or the way academic standards are upheld.

    If you are looking for the rule book

    Paragraph 335N(b) of the OfS Regulatory Framework is the sum total of the advice it has offered before today to the sector on degree algorithms.

    The design of the calculations that turn a collection of module marks (each assessed carefully against criteria set out in the module handbook, and cross-checked by an academic from another university against shared expectations of what students should achieve) into an award of a degree at a given classification is a potential area of concern:

    where a provider has changed its degree classification algorithm, or other aspects of its academic regulations, such that students are likely to receive a higher classification than previous students without an increase in their level of achievement.

    These circumstances could potentially be a breach of condition of registration B4, which relates to “Assessment and Awards” – specifically condition B4.2(c), which requires that:

    academic regulations are designed to ensure that relevant awards are credible;

    Or B4.2(e), which requires that:

    relevant awards granted to students are credible at the point of being granted and when compared to those granted previously

    The current version of condition B4 came into force in May 2022.
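
    For readers who have never sat through an exam board, here is a deliberately simplified sketch of what a classification algorithm does, and why a small change to one can lift awards without any change in achievement. The weights, boundaries and discounting rule below are illustrative assumptions only, not OfS guidance or any provider's actual regulations.

    ```python
    # Illustrative only: a simplified degree classification algorithm of the kind
    # discussed above. Real provider regulations vary; the boundaries and the
    # discounting rule here are assumptions for the sketch.

    def classify(module_marks, discard_weakest_credits=0):
        """module_marks: list of (mark_out_of_100, credits) for final-stage modules."""
        marks = sorted(module_marks, key=lambda m: m[0])  # weakest first

        # Some algorithms discount a provider-defined number of the weakest credits;
        # changing this figure is exactly the kind of tweak that can lift awards.
        remaining = discard_weakest_credits
        kept = []
        for mark, credits in marks:
            drop = min(credits, remaining)
            remaining -= drop
            if credits - drop > 0:
                kept.append((mark, credits - drop))

        weighted_mean = sum(m * c for m, c in kept) / sum(c for _, c in kept)

        if weighted_mean >= 70:
            return "First"
        if weighted_mean >= 60:
            return "Upper second (2:1)"
        if weighted_mean >= 50:
            return "Lower second (2:2)"
        if weighted_mean >= 40:
            return "Third"
        return "Fail"

    # The same transcript, two algorithms: discounting the weakest 20 credits nudges
    # a borderline student from a 2:2 to a 2:1 with no change in achievement.
    transcript = [(58, 20), (59, 20), (62, 20), (61, 20), (64, 20), (48, 20)]
    print(classify(transcript))                                # Lower second (2:2)
    print(classify(transcript, discard_weakest_credits=20))    # Upper second (2:1)
    ```

    The point is not the specific rule but the sensitivity: the same transcript can fall either side of a boundary depending on design choices like discounting, rounding, or borderline conventions.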

    In the mighty list of things that OfS needs to have regard to that we know and love (section 2 of the 2017 Higher Education and Research Act), we learn that OfS has to pay mind to “the need to protect the institutional autonomy of English higher education providers” – and, in the way it regulates that it should be:

    Transparent, accountable, proportionate, and consistent and […] targeted only at cases where action is needed

    Mutant algorithms

    With all this in mind, we look at the way the regulator has acted on this latest intervention on grade inflation.

    Historically the approach has been one of assessing “unexplained” (even once, horrifyingly, “unwarranted”) good honours (1 or 2:1) degrees. There’s much more elsewhere on Wonkhe, but in essence OfS came up with its own algorithm – taking into account the degrees awarded in 2010-11 and the varying proportions of students in given subject areas, with given A levels and of a given age – that starts from the position that non-traditional students shouldn’t be getting as many good grades as their peers (three good A levels, straight from school), and if they did then this was potentially evidence of a problem.

    To quote from annex B (“statistical modelling”) of last year’s release:

    We interact subject of study, entry qualifications and age with year of graduation to account for changes in awarding […] our model allows us to statistically predict the proportion of graduates awarded a first or an upper second class degree, or a first class degree, accounting for the effects of these explanatory variables.

    When I wrote this up last year I did a plot of the impact each of these variables is expected to have: the fixed effect coefficient estimates show the increase (or decrease) in the likelihood of a person getting a first or upper second class degree.
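
    For the statistically curious, here is a minimal sketch of the kind of model annex B describes. The data file and column names are assumptions, and this is a plain logit rather than whatever fuller machinery OfS actually runs – the aim is just to show where “unexplained” awards come from.

    ```python
    # A minimal sketch of the kind of model described in annex B, not OfS's actual
    # code. The data frame, column names and file are assumed for illustration.
    import pandas as pd
    import statsmodels.formula.api as smf

    # graduates: one row per graduate, with categorical explanatory variables and a
    # binary outcome (1 = awarded a first or upper second class degree).
    graduates = pd.read_csv("graduates.csv")  # hypothetical file

    # Subject, entry qualifications and age are each interacted with year of
    # graduation, mirroring the quoted description of the model.
    model = smf.logit(
        "good_honours ~ C(subject)*C(grad_year) + C(entry_quals)*C(grad_year) + C(age_group)*C(grad_year)",
        data=graduates,
    ).fit()

    # The fitted coefficients play the role of the "fixed effect coefficient
    # estimates" plotted above. Predictions give the "expected" share of good
    # honours; the gap between a provider's observed and predicted awards is what
    # gets labelled "unexplained".
    graduates["predicted"] = model.predict(graduates)
    by_provider = graduates.groupby("provider").agg(
        observed=("good_honours", "mean"),
        predicted=("predicted", "mean"),
    )
    by_provider["unexplained"] = by_provider["observed"] - by_provider["predicted"]
    print(by_provider.sort_values("unexplained", ascending=False).head())
    ```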

    One is tempted to wonder whether the bit of OfS that deals with this issue ever speaks to the bit that is determined to drive out awarding gaps based on socio-economic background (which, as we know, very closely correlates with A level results). This is certainly one way of explaining why – if you look at the raw numbers – the providers awarding more first class and 2:1 degrees are the Russell Group and small selective specialist providers.

    Based on this model (which for 2023-24 failed to accurately predict fully fifty per cent of the grades awarded) OfS selected – back in 2022(!) – three providers where it felt that the “unexplained” awards had risen surprisingly quickly over a single year.

    What OfS found (and didn’t find)

    Teesside University was not found to have ever been in breach of condition B4 – OfS was unable to identify statistically significant differences in the proportion of “good” honours awarded to a single cohort of students if it applied each of the three algorithms Teesside has used over the past decade or so. There has been – we can unequivocally say – no evidence of artificial grade inflation at Teesside University.
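
    To give a flavour of what that kind of check involves (a sketch with made-up marks and rules, not Teesside's algorithms or data): apply two candidate algorithms to the same cohort and test whether the paired classifications differ.

    ```python
    # A sketch of the kind of check described here: apply two classification rules
    # to the same cohort's module marks and test whether the share of "good"
    # honours differs. Everything below (the simulated cohort, the two rules, the
    # 60-mark boundary) is an assumption for illustration only.
    import random
    from statsmodels.stats.contingency_tables import mcnemar

    random.seed(1)
    cohort = [[random.gauss(58, 8) for _ in range(6)] for _ in range(500)]  # six equal-credit modules

    def good_old(marks):                     # straight mean of all modules
        return sum(marks) / len(marks) >= 60

    def good_new(marks):                     # discount the weakest module
        best = sorted(marks)[1:]
        return sum(best) / len(best) >= 60

    old = [good_old(m) for m in cohort]
    new = [good_new(m) for m in cohort]

    # Paired binary outcomes on the same students: McNemar's test on the 2x2 table.
    table = [
        [sum(a and b for a, b in zip(old, new)), sum(a and not b for a, b in zip(old, new))],
        [sum(b and not a for a, b in zip(old, new)), sum((not a) and (not b) for a, b in zip(old, new))],
    ]
    print(mcnemar(table, exact=True).pvalue)  # a small p-value means the change shifts awards
    ```

    In Teesside's case the equivalent comparison came back non-significant, which is why no evidence of artificial grade inflation could be sustained.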

    St Mary’s University, Twickenham and the University of West London were found to have historically been in breach of condition B4. The St Mary’s issue related to an approach that was introduced in 2016-17 and was replaced in 2021-22, in West London the offending practice was introduced in 2015-16 and replaced in 2021-22. In both cases, the replacement was made because of an identified risk of grade inflation. And for each provider a small number of students may have had their final award calculated using the old approach since 2021-22, based on a need to not arbitrarily change an approach that students had already been told about.

    To be clear – there is no evidence that either university has breached condition B4 (not least because condition B4 came into force after the offending algorithms had been replaced). In each instance the provider in question has made changes based on the evidence it has seen that an aspect of the algorithm is not having the desired effect, exactly the way in which assurance processes should (and generally do) work.

    Despite none of the providers in question currently being in breach of B4, all three are now judged to be at an increased risk of breaching condition B4.

    No evidence has been provided as to why these three particular institutions are at an “increased risk” of a breach while others who may use substantially identical approaches to calculating final degree awards (but have not been lucky enough to undergo an OfS inspection on grade inflation) are not. Each is required to conduct a “calibration exercise” – basically a review of their approach to awarding undergraduate degrees of the sort each has already completed (and made changes based on) in recent years.

    Vibes-based regulation

    Alongside these three combined investigation/regulatory decision publications comes a report on bachelors’ degree classification algorithms. This purports to set out the “lessons learned” from the three reports, but it actually sets up what amounts to a revision to condition B4.

    We recognise that we have not previously published our views relating to the use of algorithms in the awarding of degrees. We look forward to positive engagement with the sector about the contents of this report. Once the providers we have investigated have completed the actions they have agreed to undertake, we may update it to reflect the findings from those exercises.

    The important word here is “views”. OfS expresses some views on the design of degree algorithms, but it is not the first to do so and there are other equally valid views held by professional bodies, providers, and others – there is a live debate and a substantial academic literature on the topic. Academia is the natural home of this kind of exchange of views, and in the crucible of scholarly debate evidence and logical consistency are winning moves. Having looked at every algorithm he could find, Jim Dickinson covers the debates over algorithm characteristics elsewhere on the site.

    It does feel like these might be views expressed ahead of a change to condition B4 – something that OfS does have the power to do, but which would most likely (in terms of good regulatory practice, and given the sensitive nature of work on academic standards, which is managed elsewhere in the UK by providers themselves) be subject to a full consultation. OfS is suggesting that it is likely to find certain practices incompatible with the current B4 requirements – something which amounts to a de facto change in the rules even if it has been done under the guise of guidance.

    Providers are reminded that (as they are already expected to do) they must monitor the accuracy and reliability of current and future degree algorithms – and there is a new reportable event: providers need to tell OfS if they change their algorithm in a way that may result in an increase in “good” honours degrees awarded.

    And – this is the kicker – when they do make these changes, the external calibration they do cannot relate to external examiner judgements. The belief here is that external examiners only ever work at a module level, and don’t have a view over an entire course.

    There is even a caveat – a provider might ask a current or former external examiner to take an external look at their algorithm in a calibration exercise, but the provider shouldn’t rely solely on their views as a “fresh perspective” is needed. This reads back to that rather confusing section of the recent white paper about “assessing the merits of the sector continuing to use the external examiner system” while apparently ignoring the bit around “building the evidence base” and “seeking employers’ views”.

    Academic judgement

    Historically, all this has been a matter for the sector – academic standards in the UK’s world-leading higher education sector have been set and maintained by academics. As long ago as 2019 the UK Standing Committee for Quality Assessment (now known as the Quality Council for UK Higher Education) published a Statement of Intent on fairness in degree classification.

    It is short, clear and to the point: as was then the fashion in quality assurance circles. Right now we are concerned with paragraph b, which commits providers to protecting the value of their degrees by:

    reviewing and explaining how their processes for calculating final classifications fully reflect student attainment against learning criteria, protect the integrity of classification boundary conventions, and maintain comparability of qualifications in the sector and over time

    That’s pretty uncontroversial, as is the recommended implementation pathway in England: a published “degree outcomes statement” articulating the results of an internal institutional review.

    The idea was that these statements would show the kind of quantitative trends that OfS gets interested in, offer some assurance that institutional assessment processes meet the reference points and reflect the expertise and experience of external examiners, and provide a clear and publicly accessible rationale for the degree algorithm. As Jim sets out elsewhere, in the main this has happened – though it hasn’t been an unqualified success.

    To be continued

    The release of this documentation prompts a number of questions, both on the specifics of what is being done and more widely on the way in which this approach does (or does not) constitute good regulatory practice.

    It is fair to ask, for instance, whether OfS has the power to decide that it has concerns about particular degree awarding practices, even where it is unable to point to evidence that these practices are currently having a significant impact on degrees awarded, and to promote a de facto change in interpretation of regulation that will discourage their use.

    Likewise, it seems problematic that OfS believes it has the power to declare that the three providers it investigated are at risk of breaching a condition of registration because they have an approach to awarding degrees that it has decided that it doesn’t like.

    It is concerning that these three providers have been announced as being at higher risk of a breach when other providers with similar practices have not. It is worth asking whether this outcome meets the criteria for transparent, accountable, proportionate, and consistent regulatory practice – and whether it represents action being targeted only at cases where it is demonstrably needed.

    More widely, the power to determine or limit the role and purpose of external examiners in upholding academic standards has not historically been one held by a regulator acting on behalf of the government. The external examiner system is a “sector recognised standard” (in the traditional sense) and generally commands the confidence of registered higher education providers. And it is clearly a matter of institutional autonomy – remember in HERA OfS needs to “have regard to” institutional autonomy over assessment, and it is difficult to square this intervention with that duty.

    And there is the worry about the value and impact of sector consultation – an issue picked up in the Industry and Regulators Committee review of OfS. Should a regulator really be initiating a “dialogue with the sector” when its preferences on the external examiner system are already so clearly stated? And it isn’t just the sector – a consultation needs to ensure that the views of employers (and other stakeholders, including professional bodies) are reflected in whatever becomes the final decision.

    Much of this may become clear over time – there is surely more to follow in the wider overhaul of assurance, quality, and standards regulation that was heralded in the post-16 white paper. A full consultation will help centre the views of employers, course leaders, graduates, and professional bodies – and the parallel work on bringing the OfS quality functions back into alignment with international standards will clearly also have an impact.

  • Why busy educators need AI with guardrails

    In the growing conversation around AI in education, speed and efficiency often take center stage, but that focus can tempt busy educators to use what’s fast rather than what’s best. To truly serve teachers–and above all, students–AI must be built with intention and clear constraints that prioritize instructional quality, ensuring efficiency never comes at the expense of what learners need most.

    AI doesn’t inherently understand fairness, instructional nuance, or educational standards. It mirrors its training and guidance, usually as a capable generalist rather than a specialist. Without deliberate design, AI can produce content that’s misaligned or confusing. In education, fairness means an assessment measures only the intended skill and does so comparably for students from different backgrounds, languages, and abilities–without hidden barriers unrelated to what’s being assessed. Effective AI systems in schools need embedded controls to avoid construct‑irrelevant content: elements that distract from what’s actually being measured.

    For example, a math question shouldn’t hinge on dense prose, niche sports knowledge, or culturally specific idioms unless those are part of the goal; visuals shouldn’t rely on low-contrast colors that are hard to see; audio shouldn’t assume a single accent; and timing shouldn’t penalize students if speed isn’t the construct.
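
    As a concrete, deliberately simplified illustration, an embedded guardrail of this kind can be as modest as an automated pre-review check that flags construct-irrelevant features before a human reviewer ever sees the item. The blocklist, thresholds, and flag-rather-than-reject approach below are assumptions for the sketch, not any particular vendor’s implementation.

    ```python
    # A minimal sketch of one kind of embedded guardrail; the blocklist, the
    # reading-load threshold, and the idea of flagging (not auto-rejecting) are
    # illustrative assumptions.
    import re

    IDIOM_BLOCKLIST = {"ballpark figure", "home run", "slam dunk", "piece of cake"}
    MAX_WORDS_PER_SENTENCE = 20  # keep reading load low when reading isn't the construct

    def flag_construct_irrelevant(item_text: str, construct: str = "math") -> list[str]:
        """Return human-readable flags for an AI-generated item before expert review."""
        flags = []
        lower = item_text.lower()

        for idiom in IDIOM_BLOCKLIST:
            if idiom in lower:
                flags.append(f"culturally or sport-specific idiom: '{idiom}'")

        sentences = [s for s in re.split(r"[.!?]+", item_text) if s.strip()]
        for s in sentences:
            if len(s.split()) > MAX_WORDS_PER_SENTENCE and construct != "reading":
                flags.append("dense prose: sentence exceeds the reading-load threshold")

        return flags

    item = ("Give a ballpark figure for the number of home runs needed if each run "
            "scores 4 points and the team needs 36 points.")
    print(flag_construct_irrelevant(item))
    ```

    In practice, checks like this sit alongside expert review rather than replacing it.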

    To improve fairness and accuracy in assessments:

    • Avoid construct-irrelevant content: Ensure test questions focus only on the skills and knowledge being assessed.
    • Use AI tools with built-in fairness controls: Generic AI models may not inherently understand fairness; choose tools designed specifically for educational contexts.
    • Train AI on expert-authored content: AI is only as fair and accurate as the data and expertise it’s trained on. Use models built with input from experienced educators and psychometricians.

    These subtleties matter. General-purpose AI tools, left untuned, often miss them.

    The risk of relying on convenience

    Educators face immense time pressures. It’s tempting to use AI to quickly generate assessments or learning materials. But speed can obscure deeper issues. A question might look fine on the surface but fail to meet cognitive complexity standards or align with curriculum goals. These aren’t always easy problems to spot, but they can impact student learning.

    To choose the right AI tools:

    • Select domain-specific AI over general models: Tools tailored for education are more likely to produce pedagogically sound and standards-aligned content that empowers students to succeed. In a 2024 University of Pennsylvania study, students using a customized AI tutor scored 127 percent higher on practice problems than those without.
    • Be cautious with out-of-the-box AI: Without expertise, educators may struggle to critique or validate AI-generated content, risking poor-quality assessments.
    • Understand the limitations of general AI: While capable of generating content, general models may lack depth in educational theory and assessment design.

    General AI tools can get you 60 percent of the way there. But that last 40 percent is the part that ensures quality, fairness, and educational value. This requires expertise to get right. That’s where structured, guided AI becomes essential.

    Building AI that thinks like an educator

    Developing AI for education requires close collaboration with psychometricians and subject matter experts to shape how the system behaves. This helps ensure it produces content that’s not just technically correct, but pedagogically sound.

    To ensure quality in AI-generated content:

    • Involve experts in the development process: Psychometricians and educators should review AI outputs to ensure alignment with learning goals and standards.
    • Use manual review cycles: Unlike benchmark-driven models, educational AI requires human evaluation to validate quality and relevance.
    • Focus on cognitive complexity: Design assessments with varied difficulty levels and ensure they measure intended constructs.

    This process is iterative and manual. It’s grounded in real-world educational standards, not just benchmark scores.

    Personalization needs structure

    AI’s ability to personalize learning is promising. But without structure, personalization can lead students off track. AI might guide learners toward content that’s irrelevant or misaligned with their goals. That’s why personalization must be paired with oversight and intentional design.

    To harness personalization responsibly:

    • Let experts set goals and guardrails: Define standards, scope and sequence, and success criteria; AI adapts within those boundaries.
    • Use AI for diagnostics and drafting, not decisions: Have it flag gaps, suggest resources, and generate practice, while educators curate and approve.
    • Preserve curricular coherence: Keep prerequisites, spacing, and transfer in view so learners don’t drift into content that’s engaging but misaligned.
    • Support educator literacy in AI: Professional development is key to helping teachers use AI effectively and responsibly.

    It’s not enough to adapt–the adaptation must be meaningful and educationally coherent.
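
    One way to picture adaptation within boundaries is a recommendation step that can only choose from an expert-approved scope and sequence, with prerequisites enforced and an educator approving the result. The skill sequence, prerequisite map, and mastery threshold below are illustrative assumptions.

    ```python
    # A sketch of personalization within expert-set guardrails. The sequence,
    # prerequisites, and mastery threshold are assumptions; the output is a
    # suggestion for an educator to approve, not an automatic decision.

    SEQUENCE = ["fractions", "decimals", "percentages", "ratio"]  # expert-defined scope & sequence
    PREREQS = {"decimals": ["fractions"], "percentages": ["decimals"], "ratio": ["percentages"]}
    MASTERY = 0.8

    def suggest_next(scores: dict) -> str | None:
        """Suggest the first skill in the approved sequence that is not yet mastered
        and whose prerequisites are mastered. Returns None if nothing is eligible."""
        for skill in SEQUENCE:
            already_mastered = scores.get(skill, 0.0) >= MASTERY
            prereqs_met = all(scores.get(p, 0.0) >= MASTERY for p in PREREQS.get(skill, []))
            if not already_mastered and prereqs_met:
                return skill
        return None

    # An AI tutor can draft practice for the suggested skill, but it cannot wander
    # outside the sequence or skip prerequisites, and an educator approves the plan.
    print(suggest_next({"fractions": 0.9, "decimals": 0.55}))  # -> "decimals"
    ```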

    AI can accelerate content creation and internal workflows. But speed alone isn’t a virtue. Without scrutiny, fast outputs can compromise quality.

    To maintain efficiency and innovation:

    • Use AI to streamline internal processes: Beyond student-facing tools, AI can help educators and institutions build resources faster and more efficiently.
    • Maintain high standards despite automation: Even as AI accelerates content creation, human oversight is essential to uphold educational quality.

    Responsible use of AI requires processes that ensure every AI-generated item is part of a system designed to uphold educational integrity.

    An effective approach to AI in education is driven by concern–not fear, but responsibility. Educators are doing their best under challenging conditions, and the goal should be building AI tools that support their work.

    When frameworks and safeguards are built-in, what reaches students is more likely to be accurate, fair, and aligned with learning goals.

    In education, trust is foundational. And trust in AI starts with thoughtful design, expert oversight, and a deep respect for the work educators do every day.

  • From improvement to compliance – a significant shift in the purpose of the TEF

    The Teaching Excellence Framework has always had multiple aims.

    It was partly intended to rebalance institutional focus from research towards teaching and student experience. Jo Johnson, the minister who implemented it, saw it as a means of increasing undergraduate teaching resources in line with inflation.

    Dame Shirley Pearce prioritised enhancing quality in her excellent review of TEF implementation. And there have been other purposes of the TEF: a device to support regulatory interventions where quality fell below required thresholds, and as a resource for student choice.

    And none of this should ignore its enthusiastic adoption by student recruitment teams as a marketing tool.

    As former Chair and Deputy Chair of the TEF, we are perhaps more aware than most of these competing purposes, and more experienced in understanding how regulators, institutions and assessors have navigated the complexity of TEF implementation. The TEF has had its critics – something else we are keenly aware of – but it has had a marked impact.

    Its benchmarked indicator sets have driven a data-informed and strategic approach to institutional improvement. Its concern with disparities for underrepresented groups has raised the profile of equity in institutional education strategies. Its whole institution sweep has made institutions alert to the consequences of poorly targeted education strategies and prioritised improvement goals. Now, the publication of the OfS’s consultation paper on the future of the TEF is an opportunity to reflect on how the TEF is changing and what it means for the regulatory and quality framework in England.

    A shift in purpose

    The consultation proposes that the TEF becomes part of what the OfS sees as a more integrated quality system. All registered providers will face TEF assessments, with no exemptions for small providers. Given the number of new providers seeking OfS registration, it is likely that the number to be assessed will be considerably larger than the 227 institutions in the 2023 TEF.

    Partly because of the larger number of assessments to be undertaken, TEF will move to a rolling cycle, with a pool of assessors. Institutions will still be awarded three grades – one for outcomes, one for experience and one overall – but their overall grade will simply be the lower of the two other grades. The real impact of this will be on Bronze-rated providers, who could find themselves subject to a range of measures, potentially including student number controls or fee constraints, until they show improvement.

    The OfS consultation paper marks a significant shift in the purpose of the TEF, from quality enhancement to regulation and from improvement to compliance. The most significant changes are at the lower end of assessed performance. The consultation paper makes sensible changes to aspects of the TEF which always posed challenges for assessors and regulators, tidying up the relationship between the threshold B3 standards and the lowest TEF grades. It correctly separates measures of institutional performance on continuation and completion – over which institutions have more direct influence – from progression to employment – over which institutions have less influence.

    Pressure points

    But it does this at some heavy costs. By treating the Bronze grade as a measure of performance at, rather than above, threshold quality, it will produce just two grades above the threshold. In shifting the focus towards quantitative indicators and away from institutional discussion of context, it will make TEF life more difficult for further education institutions and institutions in locations with challenging graduate labour markets. The replacement of the student submission with student focus groups may allow more depth on some issues, but comes at the expense of breadth, and the student voice is, disappointingly, weakened.

    There are further losses as the regulatory purpose is embedded. The most significant is the move away from educational gain, and this is a real loss: following TEF 2023, almost all institutions were developing their approaches to and evaluation of educational gain, and we have seen many examples where this was shaping fruitful approaches to articulating institutional goals and the way they shape educational provision.

    Educational gain is an area in which institutions were increasingly thinking about distinctiveness and how it informs student experience. It is a real loss to see it go, and it will weaken the power of many education strategies. It is almost certainly the case that the ideas of educational gain and distinctiveness are going to be required for confident performance at the highest levels of achievement, but it is a real pity that it is less explicit. Educational gain can drive distinctiveness, and distinctiveness can drive quality.

    Two sorts of institutions will face the most significant challenges. The first, obviously, are providers rated Bronze in 2023, or Silver-rated providers whose indicators are on a downward trajectory. Eleven universities were given a Bronze rating overall in the last TEF exercise – and 21 received Bronze either for the student experience or student outcomes aspects. Of the 21, only three Bronzes were for student outcomes, but under the OfS plans, all would be graded Bronze, since any institution would be given its lowest aspect grade as its overall grade. Under the proposals, Bronze-graded institutions will need to address concerns rapidly to mitigate impacts on growth plans, funding, prestige and competitive position.

    The second group facing significant challenges will be those in difficult local and regional labour markets. Of the 18 institutions with Bronze in one of the two aspects of TEF 2023, only three were graded Bronze for student outcomes, whereas 15 were for student experience. Arguably this was to be expected when only two of the six features of student outcomes had associated indicators: continuation/completion and progression.

    In other words, if indicators were substantially below benchmark, there were opportunities to show how outcomes were supported and educational gain was developed. Under the new proposals, the approach to assessing student outcomes is largely, if not exclusively, indicator-based, focused on continuation and completion. The approach is likely to reinforce differences between institutions, especially those with intakes from underrepresented populations.

    The stakes

    The new TEF will play out in different ways in different parts of the sector. The regulatory focus will increase pressure on some institutions, whilst appearing to relieve it in others. For those institutions operating at 2023 Bronze levels or where 2023 Silver performance is declining, the negative consequences of a poor performance in the new TEF, which may include student number controls, will loom large in institutional strategy. The stakes are now higher for these institutions.

    On the other hand, institutions whose graduate employment and earnings outcomes are strong are likely to feel more relieved, though careful reading of the grade specifications for higher performance suggests that there is work to be done on education strategies in even the best-performing 2023 institutions.

    In public policy, lifting the floor – by addressing regulatory compliance – and raising the ceiling – by promoting improvement – at the same time is always difficult, but the OfS consultation seems to have landed decisively on the side of compliance rather than improvement.

  • TEF6: the incredible machine takes over quality assurance regulation

    If you loved the Teaching Excellence Framework, were thrilled by the outcomes (B3) thresholds, lost your mind for the Equality of Opportunity Risk Register, and delighted in the sporadic risk-based OfS investigations based on years-old data, you’ll find a lot to love in the latest set of Office for Students proposals on quality assurance.

    In today’s Consultation on the future approach to quality regulation you’ll find a cyclical, cohort-based TEF that also includes a measurement (against benchmarks) of compliance with the thresholds for student outcomes inscribed in the B3 condition. Based on the outcomes of this super-TEF, and prioritised based on an assessment of risk, OfS will make interventions (including controls on recruitment and on the conditions of degree awarding powers) and targeted investigations. This is a first stage consultation only; stage two will come in August 2026.

    It’s not quite a grand unified theory: we don’t mix in the rest of the B conditions (covering less pressing matters like academic standards, the academic experience, student support, assessment) because, in the words of OfS:

    Such an approach would be likely to involve visits to all providers, to assess whether they meet all the relevant B conditions of registration

    The students who are struggling right now with the impacts of higher student/staff ratios and a lack of capacity due to over-recruitment will greatly appreciate this reduction in administrative burden.

    Where we left things

    When we last considered TEF we were expecting an exercise every four years, drawing on provider narrative submissions (which included a chunk on a provider’s own definition and measurement of educational gain), students’ union narrative submissions, and data on outcomes and student satisfaction. Providers were awarded a “medal” for each of student outcomes and student experience – a matrix determined whether this resulted in an overall Bronze, Silver, Gold or Requires Improvement.

    The first three of these awards were deemed to be above minimum standards (with slight differences between each), while the latter was a portal to the much more punitive world of regulation under group B (student experience) conditions of registration. Most of the good bits of this approach came from the genuinely superb Pearce Review of TEF conducted under section 26 of the Higher Education and Research Act, which fixed a lot of the statistical and process nonsense that had crept in under previous iterations and then-current plans (though not every recommendation was implemented).

    TEF awards were last made in 2023, with the next iteration – involving all registered providers plus anyone else who wanted to play along – due in 2027.

    Perma-TEF

    A return to a rolling TEF rather than a quadrennial quality enhancement jamboree means a pool of TEF assessors rather than a one-off panel. There will be steps taken to ensure that an appropriate group of academic and student assessors is selected to assess each cohort – there will be special efforts made to use those with experience of smaller, specialist, and college-based providers – and a tenure of two-to-three years is planned. OfS is also considering whether its staff can be included among the storied ranks of those empowered to facilitate ratings decisions.

    Likewise, we’ll need a more established appeals system. Open only to those with Bronze or Requires Improvement ratings (Gold and Silver are passing grades), it would be a way for those providers to potentially forestall the engagement and investigations that follow from an active risk to student experience or outcomes, or from a risk of a future breach of a condition of registration.

    Each provider would be assessed once every three years – all providers taking part in the first cycle would be assessed in either 2027-28, 2028-29, or 2029-30 (which covers only undergraduate students because there’s no postgraduate NSS yet – OfS plan to develop one before 2030). In many cases they’ll only know which one at the start of the academic year in question, which will give them six months to get their submissions sorted.

    Because Bronze is now bad (rather than “good but not great” as it used to be) the first year’s cohort could well include all providers with a 2023 Bronze (or Requires Improvement) rating, plus some with increased risks of non-compliance, some with Bronze in one of the TEF aspects, and some without a rating.

    After this, how often you are assessed depends on your rating – if you are Gold overall it is five years till the next try, Silver means four years, and Bronze three (if you are “Requires Improvement” you probably have other concerns beyond the date of your next assessment) but this can be tweaked if OfS decides there is an increased risk to quality or for any other reason.

    Snakes and ladders

    Ignore the gradations and matrices in the Pearce Review – the plan now is that your lowest TEF aspect rating (remember you got sub-awards last time for student experience and student outcomes) will be your overall rating. So Silver for experience and Bronze for outcomes makes for an overall Bronze. As OfS has decided that you now have to pay (likely around £25,000) to enter what is a compulsory exercise, this is a cost that could lead to a larger cost in future.
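
    In case the mechanics aren’t obvious, the proposed rule amounts to taking the minimum of the two aspect ratings. A sketch, with the ranking order assumed from the consultation:

    ```python
    # A minimal sketch of the proposed "lowest aspect wins" rule; the ordering of
    # the four ratings below is an assumption about how they rank.
    RANK = {"Requires Improvement": 0, "Bronze": 1, "Silver": 2, "Gold": 3}

    def overall_rating(experience: str, outcomes: str) -> str:
        # The overall award is simply whichever aspect rating is lower.
        return min(experience, outcomes, key=RANK.get)

    print(overall_rating("Silver", "Bronze"))  # -> "Bronze"
    print(overall_rating("Gold", "Gold"))      # -> "Gold"
    ```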

    In previous TEFs, the only negative consequence for those outside of the top ratings has been reputational – a loss of bragging rights of, arguably, negligible value. The new proposals align Bronze with the (B3) minimum required standards and put Requires Improvement below these: in the new calculus of value the minimum is not good enough and there will be consequences.

    We’ve already had some hints that a link to fee cap levels is back on the cards, but in the meantime OfS is pondering a cap on student numbers expansion to punish those who turn out Bronze or Requires Improvement. The workings of the expansion cap will be familiar to those who recall the old additional student numbers process – increases of more than five per cent (the old tolerance band, which is still a lot) would not be permitted for poorly rated providers.

    For providers without degree awarding powers it is unlikely they will be successful in applying for them with Bronze and below – but OfS is also thinking about restricting aspects of existing providers’ DAPs, for example limiting their ability to subcontract or franchise provision in future. This is another de facto numbers cap in many cases, and is all ahead of a future consultation on DAPs that could make for an even closer link with TEF.

    Proposals for progression

    Proposal 6 will simplify the existing B3 thresholds, and integrate the way they are assessed into the TEF process. In a nutshell, the progression requirement for B3 would disappear – with the assessment made purely on continuation and completion, with providers able to submit contextual and historic information to explain why performance is not above the benchmark or threshold as a part of the TEF process.

    Progression will still be considered at the higher levels of TEF, and here contextual information can play more of a part – with what I propose we start calling the Norland Clause allowing providers to submit details of courses that lead to jobs that ONS does not consider as professional or managerial. That existing indicator will be joined by another based on (Graduate Outcomes) graduate reflections on how they are using what they have learned, and benchmarked salaries three years after graduation from DfE’s Longitudinal Educational Outcomes (LEO) data – in deference to that random Kemi Badenoch IFS commission at the tail end of the last parliament.

    Again, there will be contextual benchmarks for these measures (and hopefully some hefty caveating on the use of LEO median salaries) – and, as is the pattern in this consultation, there are detailed proposals to follow.

    Marginal gains, marginal losses

    The “educational gains” experiment, pioneered in the last TEF, is over: making this the third time that a regulator in England has tried and failed to include a measure of learning gain in some form of regulation. OfS is still happy for you to mention your educational gain work in your next narrative submission, but it isn’t compulsory. The reason: reducing burden, and a focus on comparability rather than a diversity of bespoke measures.

    Asking providers what something means in their context, rather than applying a one-size-fits-all measure of student success, was an immensely powerful component of the last exercise. Providers who started on that journey at considerable expense in data gathering and analysis may be less than pleased at this latest development – and we’d certainly understood that DfE were fans of the approach too.

    Similarly, the requirement for students to feed back through their own submissions to TEF has been removed. The ostensible reason is that students found it difficult last time round – the result is that insight from the valuable networks between existing students and their recently graduated peers is lost. The outcomes end of TEF is now very much data driven, with only the chance to explain unusual results offered. It’s a retreat from some of the contextual sense that crept in with the Pearce Review.

    Business as usual

    Even though TEF now feels like it is everywhere and for always, there’s still a place for OfS’ regular risk-based monitoring – and annex I (yes, there are that many annexes) contains a useful draft monitoring tool.

    Here it is very good to see staff:student ratios, falling entry requirements, a large growth in foundation year provision, and a rapid growth in numbers among what are noted as indicators of risk to the student experience. It is possible to imagine an excellent system, designed outside of the seemingly inviolate framework of the TEF, where events like this would trigger an investigation of provider governance and quality assurance processes.

    Alas, the main use of this monitoring is to decide whether or not to bring a TEF assessment forward, something that punts an immediate risk to students into something that will be dealt with retrospectively. If I’m a student on a first year that has ballooned from 300 to 900 from one cycle to the next, there is a lot of good a regulator can do by acting quickly – I am unlikely to care whether a Bronze or Silver award is made in a couple of years’ time.

    International principles

    One of the key recommendations of the Behan review on quality was a drawing together of the various disparate (and, yes, burdensome) streams of quality and standards assurance and enhancement into a unified whole. We obviously don’t quite get there – but there has been progress made towards another key sector bugbear that came up both in Behan and the Lords’ Industry and Regulators Committee review: adherence to international quality assurance standards (to facilitate international partnerships and, increasingly, recruitment).

    OfS will “work towards applying to join the European Quality Assurance Register for Higher Education” at the appropriate time – clearly feeling that the long overdue centring of the student voice in quality assurance (there will be an expanded role for and range of student assessors) and the incorporation of a cyclical element (to desk assessments at least) is enough to get them over the bar.

    It isn’t. Principle 2.1 of the EQAR ESG requires that “external quality assurance should address the effectiveness of the internal quality assurance processes” – philosophically establishing the key role of providers themselves in monitoring and upholding the quality of their own provision, with the external assurance process primarily assessing whether (and how well) this has been done. For whatever reason OfS believes the state (in the form of the regulator) needs to be (and is capable of being!) responsible for all quality assurance, everywhere, all the time. It’s a glaring weakness of the OfS system that urgently needs to be addressed. And it hasn’t been, this time.

    The upshot is that while the new system looks ESG-ish, it is unlikely to be judged to be in full compliance.

    Single word judgements

    The recent use of single headline judgements of educational quality in ways that have far-reaching regulatory implications is hugely problematic. The government announced the abandonment of the old “requires improvement, inadequate, good, and outstanding” judgements for schools in favour of a more nuanced “report card approach” – driven in part by the death by suicide of headteacher Ruth Perry in 2023. The “inadequate” rating given to her Caversham Primary School would have meant forced academisation and deeper regulatory oversight.

    Regulation and quality assurance in education needs to be rigorous and reliable – it also needs to be context-aware and focused on improvement rather than retribution. Giving single headline grades cute, Olympics-inspired names doesn’t really cut it – and as we approach the fifth redesign of an exercise that has only run six times since 2016 you would perhaps think that rather harder questions need to be asked about the value (and cost!) of this undertaking.

    If we want to assess and control the risks of modular provision, transnational education, rapid expansion, and a growing number of innovations in delivery we need providers as active partners in the process. If we want to let universities try new things we need to start from a position that we can trust universities to have a focus on the quality of the student experience that is robust and transparent. We are reaching the limits of the current approach. Bad actors will continue to get away with poor quality provision – students won’t see timely regulatory action to prevent this – and eventually someone is going to get hurt.

  • Back to the future for the TEF? Back to school for OfS?

    As the new academic year dawns, there is a feeling of “back to the future” for the Teaching Excellence Framework (TEF).

    And it seems that the Office for Students (OfS) needs to go “back to school” in its understanding of the measurement of educational quality.

    Both of these feelings come from the OfS Chair’s suggestion that the level of undergraduate tuition fees institutions can charge may be linked to institutions’ TEF results.

    For those just joining us on TEF-Watch, this is where the TEF began back in the 2015 Green Paper.

    At that time, the idea of linking tuition fees to the TEF’s measure of quality was dropped pretty quickly because it was, and remains, totally unworkable in any fair and reasonable way.

    This is for a number of reasons that would be obvious to anyone who has a passing understanding of how the TEF measures educational quality, which I wrote about on Wonkhe at the time.

    Can’t work, won’t work

    First, the TEF does not measure the quality of individual degree programmes. It evaluates, in a fairly broad-brush way, a whole institution’s approach to teaching quality and related outcomes. All institutions have programmes of variable quality.

    This means that linking tuition fees to TEF outcomes could lead to significant numbers of students on lower quality programmes being charged the higher rate of tuition fees.

    Second, and even more unjustly, the TEF does not give any indication of the quality of education that students will directly experience.

    Rather, when they are applying for their degree programme, it provides a measure of an institution’s general teaching quality at the time of its last TEF assessment.

    Under the plans currently being considered for a rolling TEF, this could be up to five years previously – which would mean it gives a view of educational quality at least nine years before applicants will graduate. Even if it was from the year before they enrol, it will be based on an assessment of evidence that took place at least four years before they will complete their degree programme.

    Those knowledgeable about educational quality understand that, over such a time span, educational quality could have dramatically changed. Given this, on what basis can it be fair for new students to be charged the higher rate of tuition fees as a result of a general quality of education enjoyed by their predecessors?

    These two reasons would make a system in which tuition fees were linked to TEF outcomes incredibly unfair. And that is before we even consider its impact on the TEF as a valid measure of educational quality.

    The games universities play

    The higher the stakes in the TEF, the more institutions will feel forced to game the system. In the current state of financial crisis, any institutional leader is likely to feel almost compelled to pull every trick in the book in order to ensure the highest possible tuition fee income for their institution.

    How could they not given that it could make the difference between institutional survival, a forced merger or the potential closure of their institution? This would make the TEF even less of an effective measure of educational quality and much more of a measure of how effectively institutions can play the system.

    It takes very little understanding of such processes to see that institutions with the greatest resources will be in by far the best position to finance the playing of such games. Making the stakes so high for institutions would also remove any incentive for them to use the TEF as an opportunity to openly identify educational excellence and meaningfully reflect on their educational quality.

    This would mean that the TEF loses any potential to meet its core purpose, identified by the Independent Review of the TEF, “to identify excellence and encourage enhancement”. It will instead become even more of a highly pressurised marketing exercise with the TEF outcomes having potentially profound consequences for the future survival of some institutions.

    In its own terms, the suggestion about linking undergraduate tuition fees to TEF outcomes is nothing to worry about. It simply won’t happen. What is a much greater concern is that the OfS is publicly making this suggestion at a time when it is claiming it will work harder to advocate for the sector as a force for good, and also appears to have an insatiable appetite to dominate the measurement of educational quality in English higher education.

    Any regulator that had the capacity and expertise to do either of these things would simply not be making such a suggestion at any time but particularly not when the sector faces such a difficult financial outlook.

    An OfS out of touch with its impact on the sector. Haven’t we been here before?

  • Catapult Learning is Awarded Tutoring Program Design Badge from Stanford University’s National Student Support Accelerator

    Organization recognized for excellence in high-impact tutoring design and student achievement gains

    PHILADELPHIA, Aug. 25, 2025 – Catapult Learning, a division of FullBloom that provides academic intervention programs for students and professional development solutions for teachers in K-12 schools, today announced it earned the Tutoring Program Design Badge from the National Student Support Accelerator (NSSA) at Stanford University. The designation, valid for three years, recognizes tutoring providers that demonstrate high-quality, research-aligned program design.

    The recognition comes at a time when the need for high-impact tutoring (HIT) has never been greater. As schools nationwide work to close learning gaps that widened during the COVID-19 pandemic and accelerate recovery, Catapult Learning stands out for its nearly 50-year legacy of delivering effective academic support to students who need it most.

    “Catapult Learning is honored to receive this prestigious national recognition from the NSSA at Stanford University,” said Rob Klapper, president at Catapult Learning. “We are excited to be recognized for our high-impact tutoring program design and will continue to uphold the highest standards of excellence as we support learners across the country.” 

    Each year, Catapult Learning’s programs support more than 150,000 students with nearly four million in-person tutoring sessions, in partnership with 2,100 schools and districts nationwide. Its tutors, many of whom hold four-year degrees, are highly trained professionals who are supported with ongoing coaching and professional development.

    Recent data from Catapult Learning’s HIT programs show strong academic gains across both math and reading subject areas:

    • 8 out of every 10 math students increased their pre/post score
    • 9 out of every 10 reading students increased their pre/post score

    These results come from programs that have also earned a Tier 2 evidence designation under the Every Student Succeeds Act, affirming their alignment with rigorous research standards. 

    The Badge was awarded following a rigorous, evidence-based review conducted by an independent panel of education experts. The NSSA evaluated multiple components of Catapult Learning’s program – including instructional design, tutor training and support, and the use of data to inform instruction – against its Tutoring Quality Standards.

    “This designation underscores the strength and intentionality behind our high-impact tutoring model,” said Devon Wible, vice president of teaching and learning at Catapult Learning. “This achievement reflects our deep commitment to providing high-quality, research-based tutoring that drives meaningful outcomes for learners.”

    Tutoring is available in person, virtually, or in hybrid formats, and can be scheduled before, during, or after school, including weekends. Sessions are held a minimum of three times per week, with flexible options tailored to the needs of each school or district. Catapult Learning provides all necessary materials for both students and tutors.

    To learn more about Catapult Learning’s high-impact tutoring offerings, visit: https://catapultlearning.com/high-impact-tutoring/.

    About Catapult Learning

    Catapult Learning, a division of FullBloom, provides academic intervention programs for students and professional development solutions for teachers in K-12 schools, executed by a team of experienced coaches. Our professional development services strengthen the capacity of teachers and leaders to raise and sustain student achievement. Our academic intervention programs support struggling learners with instruction tailored to the unique needs of each student. Across the country, Catapult Learning partners with 500+ school districts to produce positive outcomes that promote academic and professional growth. Catapult Learning is accredited by Cognia and has earned its 2022 System of Distinction honor.  

    Source link

  • The Society for Research into Higher Education in 1995

    The Society for Research into Higher Education in 1995

    by Rob Cuthbert

    In SRHE News and Blog a series of posts is chronicling, decade by decade, the progress of SRHE since its foundation 60 years ago in 1965. As always, our memories are supported by some music of the times.

    1995 was the year of the war in Bosnia and the Srebrenica massacre, the collapse of Barings Bank, and the Oklahoma City bombing. OJ Simpson was found not guilty of murder. US President Bill Clinton visited Ireland. President Nelson Mandela celebrated as South Africa won the Rugby World Cup, and Blackburn Rovers won the English Premier League. Cliff Richard was knighted, Blur and Oasis fought the battle of Britpop, and Robbie Williams left Take That, causing heartache for millions. John Major was UK Prime Minister and saw off an internal party challenge to be re-elected as leader of the Conservative Party. It would be two years until D:Ream sang ‘Things Can Only Get Better’ as the theme tune for the election of New Labour in 1997. Microsoft released Windows 95, and Bill Gates became the world’s richest man. Media, news and communication had not yet been revolutionised by the internet.

    Higher education in 1995

    Higher education everywhere had been much changed in the preceding decade, not least in the UK, where the binary policy had ultimately proved vulnerable: The Polytechnic Experiment ended in 1992. Lee Harvey, the long-time editor of Quality in Higher Education, and his co-author Berit Askling (Gothenburg) argued that in retrospect:

    “The 1990s has been the decade of quality in higher education. There had been mechanisms for ensuring the quality of higher education for decades prior to the 1990s, including the external examiner system in the UK and other Commonwealth countries, the American system of accreditation, and government ministerial control in much of Europe and elsewhere in the world. The 1990s, though, saw a change in the approach to higher education quality.”

    In his own retrospective for the European Journal of Education on the previous decade of ‘interesting times’, Guy Neave (Twente) agreed there had been a ‘frenetic pace of adjustment’ but

    “Despite all that is said about the drive towards quality, enterprise, efficiency and accountability and despite the attention lavished on devising the mechanics of their operation, this revolution in institutional efficiency has been driven by the political process.”

    Europe saw institutional churn with the formation of many new university institutions – over 60 in Russia during 1985-1995 in the era of glasnost, and many others elsewhere, including Dublin City University and University of Limerick in 1989. Dublin Institute of Technology, created in 1992, would spend 24 years just waiting for the chance[1] to become a technological university. 1995 saw the establishment of Aalborg in Denmark and several new Chinese universities including Guangdong University of Technology.

    UK HE in 1995

    In the UK the HE participation rate had more than doubled between 1970 (8.4%) and 1990 (19.4%) and then it grew even faster, reaching 33% by 2000. At the end of 1994-1995 there were almost 950,000 full-time students in UK HE. Michael Shattock’s 1995 paper ‘British higher education in 2025’ fairly accurately predicted a 55% APR by 2025.

    There had been seismic changes to UK HE in the 1980s and early 1990s. Polytechnic directors had for some years been lobbying for an escape from unduly restrictive local authority bureaucratic controls, under which many institutions had, for example, not even been allowed to hold bank accounts in their own names. Even so, the National Advisory Body for Public Sector HE (NAB), adroitly steered by its chair Christopher Ball (Warden of Keble) and chief executive John Bevan, previously Director of Education for the Inner London Education Authority, had often outmanoeuvred the University Grants Committee (UGC) led by Peter Swinnerton-Dyer (Cambridge). By developing the idea of the ‘teaching unit of resource’ NAB had arguably embarrassed the UGC into an analysis which declared that universities were slightly less expensive for teaching, and the (significant) difference was the amount spent on research – hence determining the initial size of total research funding, then called QR.

    Local authorities realised too slowly that controlling large polytechnics as if they were schools was not appropriate. Their attempt to head off reforms was articulated in Management for a Purpose[2], a report on Good Management Practice (GMP) prepared under the auspices of NAB, which aimed to retain local authority strategic control of the institutions which they had, after all, created and developed. It was too little, too late. (I was joint secretary to the GMP group: I guess now it’s time for me to give up.) Secretary of State Kenneth Baker’s 1987 White Paper Higher Education: Meeting the Challenge was followed rapidly by the so-called ‘Great Education Reform Bill’, coming onto the statute book as the Education Reform Act 1988. The Act took the polytechnics out of local authorities, recreating them as independent higher education corporations; it dissolved the UGC and NAB and set up the Universities Funding Council (UFC) and the Polytechnics and Colleges Funding Council (PCFC). Local authorities were left high and dry and government didn’t think twice, with the inevitable progression to the Further and Higher Education Act 1992. The 1992 Act dissolved PCFC and UFC and set up Higher Education Funding Councils for England (HEFCE) and Wales (HEFCW). It also set up a new Further Education Funding Council (FEFC) for colleges reconstituted as FE corporations and dissolved the Council for National Academic Awards. The Smashing Pumpkins celebrated “the resolute urgency of now”, FE and HE had “come a long way”, but Take That sensibly advised “Never forget where you’ve come here from”.

    Crucially, the Act allowed polytechnics to take university titles, subject to the approval of the Privy Council, and eventually 40 institutions did so in England, Wales and Scotland. In addition Cranfield was established by Royal Charter in 1993, and the University of Manchester Institute of Science and Technology became completely autonomous in 1994. The biggest hit in 1995 actually named an HE institution and its course, as Pulp sang: “She studied sculpture at St Martin’s College”. Not its proper name, but Central St Martin’s College of Art and Design would have been tougher for Jarvis Cocker to scan. The College later became part of the University of the Arts, London.

    The Conservative government was not finished yet, and the Education Act 1994 established the Teacher Training Agency and allowed students to opt out of students’ unions. Debbie McVitty for Wonkhe looked back on the 1990s through the lens of general election manifestos:

    “By the end of the eighties, the higher education sector as we know it today had begun to take shape. The first Research Assessment Exercise had taken place in 1986, primarily so that the University Grants Committee could draw from an evidence base in its decision about where to allocate limited research funding resources. … a new system of quality assessment had been inaugurated in 1990 under the auspices of the Committee of Vice Chancellors and Principals (CVCP) …

    Unlike Labour and the Conservatives, the Liberal Democrats have quite a lot to say about higher education in the 1992 election, pledging both to grow participation and increase flexibility”

    In 1992 the Liberal Democrats also pledged to abolish student loans … but otherwise many of their ideas “would surface in subsequent HE reforms, particularly under New Labour.” Many were optimistic: “Some might say, we will find a brighter day.”

    In UK HE, as elsewhere, quality was a prominent theme. David Watson wrote a famous paper for the Quality Assurance Agency (QAA) in 2006, Who Killed What in the Quality Wars?, about the 1990s battles involving HE institutions, QAA and HEFCE. Responding to Richard Harrison’s Wonkhe blog about those quality wars on 23 June 2025, Paul Greatrix blogged the next day about

    “… the bringing together of the established and public sector strands of UK higher education sector following the 1992 Further and Higher Education Act. Although there was, in principle, a unified HE structure after that point, it took many more years, and a great deal of argument, to establish a joined-up approach to quality assurance. But that settlement did not last and there are still major fractures in the regime …”

    It was a time, Greatrix suggested, when two became one (as the Spice Girls did not sing until 1996), but his argument was more Alanis Morissette: “I wish nothing but the best for you both. I’m here to remind you of the mess you left when you went away”.

    SRHE and research into higher education in 1995

    SRHE’s chairs from 1985-1995 were Gareth Williams, Peter Knight, Susan Weil, John Sizer and Leslie Wagner. The Society’s administrator Rowland Eustace handed over in 1991 to Cynthia Iliffe; Heather Eggins then became Director in 1993. Cynthia Iliffe and Heather Eggins had both worked at CNAA, which facilitated a relocation of the SRHE office from the University of Surrey to CNAA’s base at 334-354 Gray’s Inn Road, London from 1991-1995. From the top floor at Gray’s Inn Road the Society then relocated to attic rooms in 3 Devonshire St, London, shared with the Council for Educational Technology.

    In 1993 SRHE made its first Newer Researcher Award, to Heidi Safia Mirza (then at London South Bank). For its 30th anniversary SRHE staged a debate: ‘This House Prefers Higher Education in 1995 to 1965’, proposed by Professor Graeme Davies and Baroness Pauline Perry, and opposed by Dr Peter Knight and Christopher Price. My scant notes of the occasion do not, alas, record the outcome, but say only: “Now politics is dead on the campus. Utilitarianism rules. Nationalisation produces mediocrity. Quangos quell dissent. Arid quality debate. The dull uniformity of 1995. Some students are too poor.”, which rather suggest that the opposers (both fluent and entertaining speakers) had the better of it. Whether the past or the future won, we just had to roll with it. The debate was prefaced by two short papers from Peter Scott (then at Leeds) on ‘The Shape of Higher Education to Come’, and Gareth Williams (Lancaster) on ‘ Higher Education – the Next Thirty Years’.

    The debate was followed by a series of seminars presented by the Society’s six (!) distinguished vice-presidents, Christopher Ball, Patrick Coldstream, Malcolm Frazer, Peter Swinnerton-Dyer, Ulrich Teichler and Martin Trow, and then a concluding conference. SRHE was by 1995 perhaps passing its peak of influence on policy and management in UK HE, but was also steadily growing its reach and impact on teaching and learning. The Society staged a summer conference on ‘Changing the Student Experience’, leading to the 1995 annual conference. In those days each Conference was accompanied by an edited book of Precedings: The Student Experience was edited by Suzanne Hazelgrove (Bristol). One of the contributors and conference organisers, Phil Pilkington (Coventry), later reflected on the prominent role of SRHE in focusing attention on the student experience.

    Research into higher education was still a small enough field for SRHE to produce a Register of Members’ Research Interests in 1996, including Ron Barnett (UCL) (just getting started after only his first three books), Tony Becher, Ernest Boyer, John Brennan, Sally Brown, Rob Cuthbert, Jurgen Enders, Dennis Farrington, Oliver Fulton, Mary Henkel, Maurice Kogan, Richard Mawditt, Ian McNay, David Palfreyman, Gareth Parry, John Pratt, Peter Scott (in Leeds at the time), Harold Silver, Maria Slowey, Bill Taylor, Paul Trowler, David Watson, Celia Whitchurch, Maggie Woodrow, and Mantz Yorke.  SRHE members and friends, “there for you”. But storm clouds were gathering for the Society as it entered the next, financially troubled, decade.

    If you’ve read this far I hope you’re enjoying the musical references, or perhaps objecting to them (Rob Gresham, Paul Greatrix, I’m looking at you). There will be two more blogs in this series – feel free to suggest musical connections with HE events in or around 2005 or 2015, just email me at [email protected]. Or if you want to write an alternative history blog, just do it.

    Rob Cuthbert is editor of SRHE News and the SRHE Blog, Emeritus Professor of Higher Education Management, University of the West of England and Joint Managing Partner, Practical Academics. Email [email protected]. Twitter/X @RobCuthbert. Bluesky @robcuthbert22.bsky.social.


    [1] I know this was from the 1970s, but a parody version revived it in 1995

    [2] National Advisory Body (1987) Management for a Purpose: Report of the Good Management Practice Group. London: NAB


    Source link

  • Either the sector cleans up academic partnerships, or the government does

    Either the sector cleans up academic partnerships, or the government does

    When the franchising scandal first broke, many thought it was going to be a flash in the pan: an airing of the darkest depths of the sector, but not something that would really affect the mainstream.

    That hasn’t been the case.

    The more it digs, the more concerned the government seems to get, and the proposed reforms to register the largest delivery partners seem unlikely to mark the end of its attention.

    Last orders

    The sector would be foolish to wait for the government’s response to its consultation, or for the Office for Students to come knocking. Subcontracted provision in England has increased by 358 per cent over the past five years, and for some providers these students significantly outnumber those they teach directly. Franchised business and management provision has grown by 44 per cent, and the number of students from IMD quintile 1 (the most deprived) taught via these arrangements has increased by 31 per cent, compared with an overall rise in student numbers of 15 per cent.

    The sector talks a big game about institutional autonomy – and it is right to do so; autonomy is a vital attribute of the UK sector. But it shouldn’t be taken for granted, and that means demonstrating clear action when practices are scrutinised.

    Front foot

    So today, QAA has released new comprehensive guidance (part of a suite sitting underneath the UK Quality Code) to help the sector get on the front foot. For the first time since the franchising scandal broke, experts from across the UK sector have developed a toolkit for anyone working in partnerships to know what good practice can look like, what questions they should be asking themselves, and how their own provision stacks up against what others are doing.

    The guidance is framed around three discrete principles: all partnerships should add direct value to the staff and student experience and widen learning opportunities; academic standards and the quality of the student experience should not be compromised; and oversight should be as rigorous, secure and open to scrutiny as the provision delivered by a single provider. All partners share responsibility for the student learning experience and the academic standards students are held to, but it is the awarding partner who is ultimately accountable for awards offered in its name.

    If you’re working in partnership management and are concerned about how your institution should be responding to the increased scrutiny coming from government, the guidance talks you through each stage of the partnership lifecycle, with reflective questions and scenarios to prompt consideration of your own practice. And as providers put the guidance and its recommendations into practice, they will be able to tell a more convincing and reassuring story about how they work with their partners to deliver a high quality experience.

    Starter for five

    But the sector getting its house in order will only quell concerns if those scrutinising feel assured of provider action. So for anyone concerned, we’ve distilled five starter questions from the guidance that we’d expect any provider to be able to answer about their partnerships.

    Are there clear and shared academic standards? Providers should be able to produce the agreed terms on academic standards and quality assurance, together with plans for continuous improvement.

    Is oversight tailored to risk? Providers who have a large portfolio should be able to demonstrate how they take an agile, proportionate approach to each partnership.

    What are the formal governance and accountability mechanisms? A provider’s governors or board should be able to tell you what decisions have been made and why.

    How is data used to drive performance and mitigate risk? Providers should be able to tell you what data they have and what it tells them about their partnerships and the students’ experience, and any actions they plan to take.

    And finally, how does your relationship enable challenge and improvement? Providers should be able to tell you when they last spoke to each of their partners and what topics were discussed, and lead providers should be able to detail the mechanisms they use to hold their partners to account when issues arise.

    Integrity and responsibility

    The government has a duty to prevent misuse of public money and to ensure the integrity of a system that receives significant amounts of it. The regulator has a responsibility to investigate where it suspects poor practice and to act accordingly. But the sector has a responsibility – both to its students and to itself – to respond to the legitimate concerns raised around partnership provision and to demonstrate that it is taking action. This lever is just as important, if not more so, because government and regulatory action becomes more necessary and more stringent if we don’t get this right.

    The sector cannot afford not to grasp the nettle on this. Public trust, the sector’s reputation and, most importantly, the learning experience students deserve, are all on the line.

    QAA’s guidance is practical, expert-informed and rooted in shared principles to help providers not only meet expectations but lead the way in restoring confidence. Because if the sector doesn’t demonstrate its commitment to action on this, the government and the regulator surely will.

    Source link

  • Quality assurance needs consideration, not change for change’s sake

    Quality assurance needs consideration, not change for change’s sake

    It’s been a year since publication of the Behan review and six months since OfS promised to “transform” their approach to quality assessment in response. But it’s still far from clear what this looks like, or if the change is what the sector really needs.

    In proposals for a new strategy published back in December, OfS suggested refocusing regulatory activity on three strategic priorities: quality, the wider student experience and financial resilience. But while much of the mooted activity within the experience and resilience themes felt familiar, when it came to quality, more radical change was clearly on the agenda.

    The plans are heavily influenced by findings of last summer’s independent review (the Behan review). This critiqued what it saw as minimal interaction between assessment relating to baseline compliance and excellence, and recommended bringing these strands together to focus on general improvement of quality throughout the sector. In response OfS pledged to ‘transform’ quality assessment, retaining TEF at the core of an integrated approach and developing more routine and widespread activity.

    Current concerns

    Unfortunately, these bare-bones proposals raised more questions about the new integrated approach than they answered, and while OfS’ recent blog update was a welcome attempt to deliver more timely and transparent information to providers, it disappointed on detail. OfS have been discussing key issues such as the extent of integration, the scope for a new TEF framework, and methods of assessment. But while a full set of proposals will be out for consultation in the autumn, in the meantime there’s little to learn other than to expect a very different TEF, which will probably operate on a rolling cycle (assessing all institutions over a four to five year period).

    The inability to cement preparations for the next TEF will cause some frustration for providers. However, if, as the tone of communications suggests, OfS is aiming for more disruptive integration rather than a straightforward expansion of the TEF, the proposals may present some bigger concerns for the sector.

    A fundamental concern is whether an integrated approach aimed at driving overall improvement is the most effective way to tackle the sector’s current challenges around quality. Behan’s review warns against an overemphasis on baseline regulation, but below-standard provision from a significant minority of providers is where the most acute risks to students, taxpayers and sector reputation lie (as opposed to a failure to improve quality among the majority performing above the baseline). Regulation should, of course, support improvement across the board too.

    However, it’s not clear how shifting focus away from the former, let alone moving it within a framework designed to assess excellence periodically, will usefully help OfS tackle stubborn pockets of poor provision and emerging threats within a dynamic sector.

    There is also an obvious tension inherent in any attempt to bring baseline regulation within a rolling cycle, which becomes manifest as soon as OfS find serious concerns about provider quality mid-cycle. Here we should expect OfS to intervene with investigation and enforcement where appropriate to protect the student and wider stakeholder interest. But doing so would essentially involve regulating on minimum standards on top of a system that is already aiming to do that as part of an integrated approach. Moreover, if the whistleblowing routes and lead indicators which OfS seem keen to develop to alert them to issues operate effectively, and if OfS start looking seriously at franchise and potentially TNE provision, it’s easy to imagine this duplication becoming widespread.

    There is also the issue of burden, for both regulator and providers, which should be recognised within any significant shift in approach. For OfS there’s a question of the extent to which developing and delivering an integrated approach is hindering ongoing quality assessment. Meanwhile, getting to grips with new regulatory processes, and aligning internal approaches to quality assurance and reporting, will inevitably absorb significant provider resource. At a time when pressures are profound, this is likely to be particularly unwelcome and could detract significantly from the focus on delivery and students. Ironically, it’s hard to see how transformative change might not hamper the across-the-board improvements in quality that Behan advocates, and prove somewhat counter-productive to the pursuit of OfS’ other strategic goals.

    The challenge

    It’s crucial that OfS take time to consider how best to progress with any revised approach, and sector consultation throughout the process is welcome. Nevertheless, development appears to be progressing slowly, somewhat at odds with OfS’ positioning as an agile and confident regulator operating in a dynamic landscape. Maybe this should tell us something about the difficulties inherent in developing an integrated approach.

    There’s much to admire about the Behan review and OfS’ responsiveness to the recommendations is laudable. But while Behan looks to the longer term, I’m not convinced that in the current climate there’s much wrong with the idea of maintaining the incumbent framework.

    Let’s not forget that this was established by OfS only three years ago following significant development and consultation to ensure a judicious approach.

    I wonder if the real problem here is that, in contrast to a generally well received TEF (and as Behan highlights), OfS’ work on baseline quality regulation simply hasn’t progressed with the speed, clarity and bite that were anticipated and needed to drive positive change above the minimum. And I wonder if a better solution to pressing quality concerns would be for OfS to concentrate resources on improving the operation of the current framework. There certainly feels to be room to deliver more baseline investigations – and more responsive, more transparent and more impactful ones – without radical change. At the same time, the feat of maintaining a successful and much expanded TEF seems much more achievable without bringing a significant amount of assurance activity within its scope.

    We may yet see a less intrusive approach to integration proposed by OfS. I think this could be a better way forward – less burdensome and more suited to the sector’s current challenges. As the regulator reflects on their approach over the summer, with a new chair at the helm who’s closer to the provider perspective and more distanced from the independent review, perhaps this is the option they will lean towards.

    Source link