Category: Quality

  • TEF6: the incredible machine takes over quality assurance regulation

    If you loved the Teaching Excellence Framework, were thrilled by the outcomes (B3) thresholds, lost your mind for the Equality of Opportunity Risk Register, and delighted in the sporadic risk-based OfS investigations based on years-old data, you’ll find a lot to love in the latest set of Office for Students proposals on quality assurance.

    In today’s Consultation on the future approach to quality regulation you’ll find a cyclical, cohort-based TEF that also includes a measurement (against benchmarks) of compliance with the thresholds for student outcomes inscribed in the B3 condition. Based on the outcomes of this super-TEF, and prioritised by an assessment of risk, OfS will make interventions (including controls on recruitment and the conditions of degree awarding powers) and conduct targeted investigations. This is a first-stage consultation only; stage two will come in August 2026.

    It’s not quite a grand unified theory: we don’t mix in the rest of the B conditions (covering less pressing matters like academic standards, the academic experience, student support, and assessment) because, in the words of OfS:

    Such an approach would be likely to involve visits to all providers, to assess whether they meet all the relevant B conditions of registration

    The students who are struggling right now with the impacts of higher student/staff ratios and a lack of capacity due to over-recruitment will greatly appreciate this reduction in administrative burden.

    Where we left things

    When we last considered TEF we were expecting an exercise every four years, drawing on provider narrative submissions (which included a chunk on a provider’s own definition and measurement of educational gain), students’ union narrative submissions, and data on outcomes and student satisfaction. Providers were awarded a “medal” for each of student outcomes and student experience – a matrix determined whether this resulted in an overall Bronze, Silver, Gold or Requires Improvement.

    The first three of these awards were deemed to be above minimum standards (with slight differences between each), while the latter was a portal to the much more punitive world of regulation under group B (student experience) conditions of registration. Most of the good bits of this approach came from the genuinely superb Pearce Review of TEF conducted under section 26 of the Higher Education and Research Act, which fixed a lot of the statistical and process nonsense that had crept in under previous iterations and then-current plans (though not every recommendation was implemented).

    TEF awards were last made in 2023; the next iteration – involving all registered providers plus anyone else who wanted to play along – was due in 2027.

    Perma-TEF

    A return to a rolling TEF rather than a quadrennial quality enhancement jamboree means a pool of TEF assessors rather than a one-off panel. There will be steps taken to ensure that an appropriate group of academic and student assessors is selected to assess each cohort – there will be special efforts made to use those with experience of smaller, specialist, and college-based providers – and a tenure of two-to-three years is planned. OfS is also considering whether its staff can be included among the storied ranks of those empowered to facilitate ratings decisions.

    Likewise, we’ll need a more established appeals system. Open only to those with Bronze or Requires Improvement ratings (Gold and Silver are passing grades), it would be a way to potentially forestall the engagement and investigations that follow from an active risk to student experience or outcomes, or from a risk of a future breach of a condition of registration.

    Each provider would be assessed once in the first three-year cycle – all providers taking part would be assessed in either 2027-28, 2028-29, or 2029-30 (covering only undergraduate students, because there’s no postgraduate NSS yet – OfS plans to develop one before 2030). In many cases they’ll only know which year at the start of the academic year in question, which will give them six months to get their submissions sorted.

    Because Bronze is now bad (rather than “good but not great” as it used to be) the first year’s cohort could well include all providers with a 2023 Bronze (or Requires Improvement) rating, plus some with increased risks of non-compliance, some with Bronze in one of the TEF aspects, and some without a rating.

    After this, how often you are assessed depends on your rating – if you are Gold overall it is five years till the next try, Silver means four years, and Bronze three (if you are “Requires Improvement” you probably have other concerns beyond the date of your next assessment) – but this can be tweaked if OfS decides there is an increased risk to quality, or for any other reason.

    Snakes and ladders

    Ignore the gradations and matrices in the Pearce Review – the plan now is that your lowest TEF aspect rating (remember you got sub-awards last time for student experience and student outcomes) will be your overall rating. So Silver for experience and Bronze for outcomes makes for an overall Bronze. And as OfS has decided that you now have to pay (likely around £25,000) to enter what is a compulsory exercise, this is a cost that could lead to a larger cost in future.
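
    In sketch form the proposed rule is just a minimum over the aspect ratings – a minimal illustration, with the ranking encoding mine rather than OfS’s:

        # Hypothetical sketch of the proposed "lowest aspect wins" rule
        RANK = {"Requires Improvement": 0, "Bronze": 1, "Silver": 2, "Gold": 3}

        def overall_rating(experience: str, outcomes: str) -> str:
            # The overall award is simply the weaker of the two aspect ratings
            return min(experience, outcomes, key=RANK.get)

        assert overall_rating("Silver", "Bronze") == "Bronze"
        assert overall_rating("Gold", "Silver") == "Silver"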

    In previous TEFs, the only negative consequences for those outside the top ratings have been reputational – a loss of bragging rights of, arguably, negligible value. The new proposals align Bronze with the (B3) minimum required standards and put Requires Improvement below them: in the new calculus of value the minimum is not good enough, and there will be consequences.

    We’ve already had some hints that a link to fee cap levels is back on the cards, but in the meantime OfS is pondering a cap on student number expansion to punish those who turn out Bronze or Requires Improvement. The workings of the expansion cap will be familiar to those who recall the old additional student numbers process – increases of more than five per cent (the old tolerance band, which is still a lot) would not be permitted for poorly rated providers.
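
    The arithmetic of such a cap would presumably look something like this sketch – the five per cent tolerance is from the proposals, but the function and figures are illustrative:

        def expansion_permitted(current: int, proposed: int,
                                rating: str, tolerance: float = 0.05) -> bool:
            # Sketch: poorly rated providers may not grow beyond the tolerance band
            if rating not in {"Bronze", "Requires Improvement"}:
                return True  # no cap proposed for Gold or Silver providers
            growth = (proposed - current) / current
            return growth <= tolerance

        # A Bronze provider planning six per cent growth would fall foul of the cap
        assert not expansion_permitted(10_000, 10_600, "Bronze")
        assert expansion_permitted(10_000, 10_400, "Bronze")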

    For providers without degree awarding powers it is unlikely they will be successful in applying for them with Bronze and below – but OfS is also thinking about restricting aspects of existing providers’ DAPs, for example limiting their ability to subcontract or franchise provision in future. This is another de facto numbers cap in many cases, and it all comes ahead of a future consultation on DAPs that could make for an even closer link with TEF.

    Proposals for progression

    Proposal 6 will simplify the existing B3 thresholds and integrate the way they are assessed into the TEF process. In a nutshell, the progression requirement for B3 would disappear – the assessment would be made purely on continuation and completion, with providers able to submit contextual and historic information as part of the TEF process to explain why performance is not above the benchmark or threshold.
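
    In spirit the simplified baseline check reduces to something like the sketch below – the indicator names echo the consultation, but the threshold values are placeholders, not OfS’s published figures:

        # Illustrative only: placeholder thresholds, not OfS's published B3 numbers
        THRESHOLDS = {"continuation": 0.80, "completion": 0.75}

        def b3_flags(indicators: dict[str, float]) -> list[str]:
            # Return the simplified B3 indicators falling below threshold;
            # progression drops out, and context is supplied via the TEF
            # submission rather than a separate process
            return [name for name, floor in THRESHOLDS.items()
                    if indicators.get(name, 0.0) < floor]

        print(b3_flags({"continuation": 0.83, "completion": 0.71}))  # ['completion']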

    Progression will still be considered at the higher levels of TEF, and here contextual information can play more of a part – with what I propose we start calling the Norland Clause allowing providers to submit details of courses that lead to jobs that ONS does not consider professional or managerial. That existing indicator will be joined by another based on (Graduate Outcomes) graduate reflections on how they are using what they have learned, and benchmarked salaries three years after graduation from DfE’s Longitudinal Education Outcomes (LEO) data – in deference to that random Kemi Badenoch IFS commission at the tail end of the last parliament.

    Again, there will be contextual benchmarks for these measures (and hopefully some hefty caveating on the use of LEO median salaries) – and, as is the pattern in this consultation, there are detailed proposals to follow.

    Marginal gains, marginal losses

    The “educational gains” experiment, pioneered in the last TEF, is over: this makes three times that a regulator in England has tried and failed to include a measure of learning gain in some form of regulation. OfS is still happy for you to mention your educational gain work in your next narrative submission, but it isn’t compulsory. The reason: reducing burden, and a focus on comparability rather than a diversity of bespoke measures.

    Asking providers what something means in their context, rather than applying a one-size-fits-all measure of student success, was an immensely powerful component of the last exercise. Providers who started on that journey at considerable expense in data gathering and analysis may be less than pleased at this latest development – and we’d certainly understood that DfE were fans of the approach too.

    Similarly, the requirement for students to feed back on outcomes in their submissions to TEF has been removed. The ostensible reason is that students found it difficult last time round – the result is that insight from the valuable networks between existing students and their recently graduated peers is lost. The outcomes end of TEF is now very much data-driven, with only the chance to explain unusual results offered. It’s a retreat from some of the contextual sense that crept in with the Pearce Review.

    Business as usual

    Even though TEF now feels like it is everywhere and for always, there’s still a place for OfS’ regular risk-based monitoring – and annex I (yes, there’s that many annexes) contains a useful draft monitoring tool.

    Here it is very good to see staff:student ratios, falling entry requirements, a large growth in foundation year provision, and a rapid growth in numbers among what are noted as indicators of risk to the student experience. It is possible to imagine an excellent system, designed outside the seemingly inviolate framework of the TEF, where events like these would trigger an investigation of provider governance and quality assurance processes.

    Alas, the main use of this monitoring is to decide whether or not to bring a TEF assessment forward, something that punts an immediate risk to students into something that will be dealt with retrospectively. If I’m a student on a first year that has ballooned from 300 to 900 from one cycle to the next, there is a lot of good a regulator can do by acting quickly – I am unlikely to care whether a Bronze or Silver award is made in a couple of years’ time.

    International principles

    One of the key recommendations of the Behan review on quality was a drawing together of the various disparate (and, yes, burdensome) streams of quality and standards assurance and enhancement into a unified whole. We obviously don’t quite get there – but there has been progress made towards another key sector bugbear that came up both in Behan and the Lords’ Industry and Regulators Committee review: adherence to international quality assurance standards (to facilitate international partnerships and, increasingly, recruitment).

    OfS will “work towards applying to join the European Quality Assurance Register for Higher Education” at the appropriate time – clearly feeling that the long overdue centring of the student voice in quality assurance (there will be an expanded role for and range of student assessors) and the incorporation of a cyclical element (to desk assessments at least) is enough to get them over the bar.

    It isn’t. Standard 2.1 of the ESG requires that “external quality assurance should address the effectiveness of the internal quality assurance processes” – philosophically establishing the key role of providers themselves in monitoring and upholding the quality of their own provision, with the external assurance process primarily assessing whether (and how well) this has been done. For whatever reason OfS believes the state (in the form of the regulator) needs to be (and is capable of being!) responsible for all quality assurance, everywhere, all the time. It’s a glaring weakness of the OfS system that urgently needs to be addressed. And it hasn’t been, this time.

    The upshot is that while the new system looks ESG-ish, it is unlikely to be judged to be in full compliance.

    Single word judgements

    The recent use of single headline judgements of educational quality in ways that have far-reaching regulatory implications is hugely problematic. The government announced the abandonment of the old “requires improvement, inadequate, good, and outstanding” judgements for schools in favour of a more nuanced “report card” approach – driven in part by the death by suicide of headteacher Ruth Perry in 2023. The “inadequate” rating given to her Caversham Primary School would have meant forced academisation and deeper regulatory oversight.

    Regulation and quality assurance in education needs to be rigorous and reliable – it also needs to be context-aware and focused on improvement rather than retribution. Giving single headline grades cute, Olympics-inspired names doesn’t really cut it – and as we approach the fifth redesign of an exercise that has only run six times since 2016, you would perhaps think that rather harder questions need to be asked about the value (and cost!) of this undertaking.

    If we want to assess and control the risks of modular provision, transnational education, rapid expansion, and a growing number of innovations in delivery we need providers as active partners in the process. If we want to let universities try new things we need to start from a position that we can trust universities to have a focus on the quality of the student experience that is robust and transparent. We are reaching the limits of the current approach. Bad actors will continue to get away with poor quality provision – students won’t see timely regulatory action to prevent this – and eventually someone is going to get hurt.

  • Back to the future for the TEF? Back to school for OfS?

    As the new academic year dawns, there is a feeling of “back to the future” for the Teaching Excellence Framework (TEF).

    And it seems that the Office for Students (OfS) needs to go “back to school” in its understanding of the measurement of educational quality.

    Both of these feelings come from the OfS Chair’s suggestion that the level of undergraduate tuition fees institutions can charge may be linked to institutions’ TEF results.

    For those just joining us on TEF-Watch, this is where the TEF began back in the 2015 Green Paper.

    At that time, the idea of linking tuition fees to the TEF’s measure of quality was dropped pretty quickly because it was, and remains, totally unworkable in any fair and reasonable way.

    This is for a number of reasons that would be obvious to anyone who has a passing understanding of how the TEF measures educational quality, which I wrote about on Wonkhe at the time.

    Can’t work, won’t work

    First, the TEF does not measure the quality of individual degree programmes. It evaluates, in a fairly broad-brush way, a whole institution’s approach to teaching quality and related outcomes. All institutions have programmes of variable quality.

    This means that linking tuition fees to TEF outcomes could lead to significant numbers of students on lower quality programmes being charged the higher rate of tuition fees.

    Second, and even more unjustly, the TEF does not give any indication of the quality of education that students will directly experience.

    Rather, when they are applying for their degree programme, it provides a measure of an institution’s general teaching quality at the time of its last TEF assessment.

    Under the plans currently being considered for a rolling TEF, this could be up to five years previously – which would mean it gives a view of educational quality up to nine years before applicants will graduate. Even if it was from the year before they enrol, it will be based on an assessment of evidence that took place at least four years before they will complete their degree programme.

    Those knowledgeable about educational quality understand that, over such a time span, educational quality could have dramatically changed. Given this, on what basis can it be fair for new students to be charged the higher rate of tuition fees as a result of a general quality of education enjoyed by their predecessors?

    These two reasons would make a system in which tuition fees were linked to TEF outcomes incredibly unfair. And that is before we even consider its impact on the TEF as a valid measure of educational quality.

    The games universities play

    The higher the stakes in the TEF, the more institutions will feel forced to game the system. In the current state of financial crisis, any institutional leader is likely to feel almost compelled to pull every trick in the book in order to ensure the highest possible tuition fee income for their institution.

    How could they not given that it could make the difference between institutional survival, a forced merger or the potential closure of their institution? This would make the TEF even less of an effective measure of educational quality and much more of a measure of how effectively institutions can play the system.

    It takes very little understanding of such processes to see that institutions with the greatest resources will be in by far the best position to finance the playing of such games. Making the stakes so high for institutions would also remove any incentive for them to use the TEF as an opportunity to openly identify educational excellence and meaningfully reflect on their educational quality.

    This would mean that the TEF loses any potential to meet its core purpose, identified by the Independent Review of the TEF, “to identify excellence and encourage enhancement”. It will instead become even more of a highly pressurised marketing exercise with the TEF outcomes having potentially profound consequences for the future survival of some institutions.

    In its own terms, the suggestion about linking undergraduate tuition fees to TEF outcomes is nothing to worry about. It simply won’t happen. What is a much greater concern is that the OfS is publicly making this suggestion at a time when it is claiming it will work harder to advocate for the sector as a force for good, and also appears to have an insatiable appetite to dominate the measurement of educational quality in English higher education.

    Any regulator that had the capacity and expertise to do either of these things would simply not be making such a suggestion at any time but particularly not when the sector faces such a difficult financial outlook.

    An OfS out of touch with its impact on the sector. Haven’t we been here before?

  • Catapult Learning is Awarded Tutoring Program Design Badge from Stanford University’s National Student Support Accelerator

    Organization recognized for excellence in high-impact tutoring design and student achievement gains

    PHILADELPHIA, Aug. 25, 2025 – Catapult Learning, a division of FullBloom that provides academic intervention programs for students and professional development solutions for teachers in K-12 schools, today announced it earned the Tutoring Program Design Badge from the National Student Support Accelerator (NSSA) at Stanford University. The designation, valid for three years, recognizes tutoring providers that demonstrate high-quality, research-aligned program design.

    The recognition comes at a time when the need for high-impact tutoring (HIT) has never been greater. As schools nationwide work to close learning gaps that widened during the COVID-19 pandemic and accelerate recovery, Catapult Learning stands out for its nearly 50-year legacy of delivering effective academic support to students who need it most.

    “Catapult Learning is honored to receive this prestigious national recognition from the NSSA at Stanford University,” said Rob Klapper, president at Catapult Learning. “We are excited to be recognized for our high-impact tutoring program design and will continue to uphold the highest standards of excellence as we support learners across the country.” 

    Each year, Catapult Learning’s programs support more than 150,000 students with nearly four million in-person tutoring sessions, in partnership with 2,100 schools and districts nationwide. Its tutors, many of whom hold four-year degrees, are highly trained professionals who are supported with ongoing coaching and professional development.

    Recent data from Catapult Learning’s HIT programs show strong academic gains across both math and reading subject areas:

    • 8 out of every 10 math students improved their score from pre- to post-assessment
    • 9 out of every 10 reading students improved their score from pre- to post-assessment

    These results come from programs that have also earned a Tier 2 evidence designation under the Every Student Succeeds Act, affirming their alignment with rigorous research standards. 

    The Badge was awarded following a rigorous, evidence-based review conducted by an independent panel of education experts. The NSSA evaluated multiple components of Catapult Learning’s program – including instructional design, tutor training and support, and the use of data to inform instruction – against its Tutoring Quality Standards.

    “This designation underscores the strength and intentionality behind our high-impact tutoring model,” said Devon Wible, vice president of teaching and learning at Catapult Learning. “This achievement reflects our deep commitment to providing high-quality, research-based tutoring that drives meaningful outcomes for learners.”

    Tutoring is available in person, virtually, or in hybrid formats, and can be scheduled before, during, or after school, including weekends. Sessions are held a minimum of three times per week, with flexible options tailored to the needs of each school or district. Catapult Learning provides all necessary materials for both students and tutors.

    To learn more about Catapult Learning’s high-impact tutoring offerings, visit: https://catapultlearning.com/high-impact-tutoring/.

    About Catapult Learning

    Catapult Learning, a division of FullBloom, provides academic intervention programs for students and professional development solutions for teachers in K-12 schools, executed by a team of experienced coaches. Our professional development services strengthen the capacity of teachers and leaders to raise and sustain student achievement. Our academic intervention programs support struggling learners with instruction tailored to the unique needs of each student. Across the country, Catapult Learning partners with 500+ school districts to produce positive outcomes that promote academic and professional growth. Catapult Learning is accredited by Cognia and has earned its 2022 System of Distinction honor.  

  • The Society for Research into Higher Education in 1995

    by Rob Cuthbert

    In SRHE News and Blog a series of posts is chronicling, decade by decade, the progress of SRHE since its foundation 60 years ago in 1965. As always, our memories are supported by some music of the times.

    1995 was the year of the war in Bosnia and the Srebrenica massacre, the collapse of Barings Bank, and the Oklahoma City bombing. OJ Simpson was found not guilty of murder. US President Bill Clinton visited Ireland. President Nelson Mandela celebrated as South Africa won the Rugby World Cup, and Blackburn Rovers won the English Premier League. Cliff Richard was knighted, Blur v Oasis fought the battle of Britpop, and Robbie Williams left Take That, causing heartache for millions. John Major was UK Prime Minister and saw off an internal party challenge to be re-elected as leader of the Conservative Party. It would be two years until D:Ream sang ‘Things can only get better’ as the theme tune for the election of New Labour in 1997. Microsoft released Windows 95, and Bill Gates became the world’s richest man. Media, news and communication had not yet been revolutionised by the internet.

    Higher education in 1995

    Higher education everywhere had been much changed in the preceding decade, not least in the UK, where the binary policy had ultimately proved vulnerable: The Polytechnic Experiment ended in 1992. Lee Harvey, the long-time editor of Quality in Higher Education, and his co-author Berit Askling (Gothenburg) argued that in retrospect:

    “The 1990s has been the decade of quality in higher education. There had been mechanisms for ensuring the quality of higher education for decades prior to the 1990s, including the external examiner system in the UK and other Commonwealth countries, the American system of accreditation, and government ministerial control in much of Europe and elsewhere in the world. The 1990s, though, saw a change in the approach to higher education quality.”

    In his own retrospective for the European Journal of Education on the previous decade of ‘interesting times’, Guy Neave (Twente) agreed there had been a ‘frenetic pace of adjustment’ but

    “Despite all that is said about the drive towards quality, enterprise, efficiency and accountability and despite the attention lavished on devising the mechanics of their operation, this revolution in institutional efficiency has been driven by the political process.”

    Europe saw institutional churn with the formation of many new university institutions – over 60 in Russia during 1985-1995 in the era of glasnost, and many others elsewhere, including Dublin City University and University of Limerick in 1989. Dublin Institute of Technology, created in 1992, would spend 24 years just waiting for the chance[1] to become a technological university. 1995 saw the establishment of Aalborg in Denmark and several new Chinese universities including Guangdong University of Technology.

    UK HE in 1995

    In the UK the HE participation rate had more than doubled between 1970 (8.4%) and 1990 (19.4%) and then it grew even faster, reaching 33% by 2000. At the end of 1994-1995 there were almost 950,000 full-time students in UK HE. Michael Shattock’s 1995 paper ‘British higher education in 2025’ fairly accurately predicted a 55% APR by 2025.

    There had been seismic changes to UK HE in the 1980s and early 1990s. Polytechnic directors had for some years been lobbying for an escape from unduly restrictive local authority bureaucratic controls, under which many institutions had, for example, not even been allowed to hold bank accounts in their own names. Even so, the National Advisory Body for Public Sector HE (NAB), adroitly steered by its chair Christopher Ball (Warden of Keble) and chief executive John Bevan, previously Director of Education for the Inner London Education Authority, had often outmanoeuvred the University Grants Committee (UGC) led by Peter Swinnerton-Dyer (Cambridge). By developing the idea of the ‘teaching unit of resource’ NAB had arguably embarrassed the UGC into an analysis which declared that universities were slightly less expensive for teaching, and the (significant) difference was the amount spent on research – hence determining the initial size of total research funding, then called QR.

    Local authorities realised too slowly that controlling large polytechnics as if they were schools was not appropriate. Their attempt to head off reforms was articulated in Management for a Purpose[2], a report on Good Management Practice (GMP) prepared under the auspices of NAB, which aimed to retain local authority strategic control of the institutions which they had, after all, created and developed. It was too little, too late. (I was joint secretary to the GMP group: I guess, now it’s time, for me to give up.) Secretary of State Kenneth Baker’s 1987 White Paper Higher Education: Meeting the Challenge was followed rapidly by the so-called ‘Great Education Reform Bill’, coming onto the statute book as the Education Reform Act 1988. The Act took the polytechnics out of local authorities, recreating them as independent higher education corporations; it dissolved the UGC and NAB and set up the Universities Funding Council (UFC) and the Polytechnics and Colleges Funding Council (PCFC). Local authorities were left high and dry and government didn’t think twice, with the inevitable progression to the Further and Higher Education Act 1992. The 1992 Act dissolved PCFC and UFC and set up Higher Education Funding Councils for England (HEFCE) and Wales (HEFCW). It also set up a new Further Education Funding Council (FEFC) for colleges reconstituted as FE corporations and dissolved the Council for National Academic Awards. The Smashing Pumpkins celebrated “the resolute urgency of now”; FE and HE had “come a long way”, but Take That sensibly advised “Never forget where you’ve come here from”.

    Crucially, the Act allowed polytechnics to take university titles, subject to the approval of the Privy Council, and eventually 40 institutions did so in England, Wales and Scotland. In addition Cranfield was established by Royal Charter in 1993, and the University of Manchester Institute of Science and Technology became completely autonomous in 1994. The biggest hit in 1995 actually named an HE institution and its course, as Pulp sang: “She studied sculpture at St Martin’s College”. Not its proper name, but Central St Martin’s College of Art and Design would have been tougher for Jarvis Cocker to scan. The College later became part of the University of the Arts, London.

    The Conservative government was not finished yet, and the Education Act 1994 established the Teacher Training Agency and allowed students to opt out of students’ unions. Debbie McVitty for Wonkhe looked back on the 1990s through the lens of general election manifestos:

    “By the end of the eighties, the higher education sector as we know it today had begun to take shape. The first Research Assessment Exercise had taken place in 1986, primarily so that the University Grants Committee could draw from an evidence base in its decision about where to allocate limited research funding resources. … a new system of quality assessment had been inaugurated in 1990 under the auspices of the Committee of Vice Chancellors and Principals (CVCP) …

    Unlike Labour and the Conservatives, the Liberal Democrats have quite a lot to say about higher education in the 1992 election, pledging both to grow participation and increase flexibility”

    In 1992 the Liberal Democrats also pledged to abolish student loans … but otherwise many of their ideas “would surface in subsequent HE reforms, particularly under New Labour.” Many were optimistic: “Some might say, we will find a brighter day.”

    In UK HE, as elsewhere, quality was a prominent theme. David Watson wrote a famous paper for the Quality Assurance Agency (QAA) in 2006, Who Killed What in the Quality Wars?, about the 1990s battles involving HE institutions, QAA and HEFCE. Responding to Richard Harrison’s Wonkhe blog about those quality wars on 23 June 2025, Paul Greatrix blogged the next day about

    “… the bringing together of the established and public sector strands of UK higher education sector following the 1992 Further and Higher Education Act. Although there was, in principle, a unified HE structure after that point, it took many more years, and a great deal of argument, to establish a joined-up approach to quality assurance. But that settlement did not last and there are still major fractures in the regime …”

    It was a time, Greatrix suggested, when two became one (as the Spice Girls did not sing until 1996), but his argument was more Alanis Morissette: “I wish nothing but the best for you both. I’m here to remind you of the mess you left when you went away”.

    SRHE and research into higher education in 1995

    SRHE’s chairs from 1985-1995 were Gareth Williams, Peter Knight, Susan Weil, John Sizer and Leslie Wagner. The Society’s administrator Rowland Eustace handed over in 1991 to Cynthia Iliffe; Heather Eggins then became Director in 1993. Cynthia Iliffe and Heather Eggins had both worked at CNAA, which facilitated a relocation of the SRHE office from the University of Surrey to CNAA’s base at 334-354 Gray’s Inn Road, London from 1991-1995. From the top floor at Gray’s Inn Road the Society then relocated to attic rooms in 3 Devonshire St, London, shared with the Council for Educational Technology.

    In 1993 SRHE made its first Newer Researcher Award, to Heidi Safia Mirza (then at London South Bank). For its 30th anniversary SRHE staged a debate: ‘This House Prefers Higher Education in 1995 to 1965’, proposed by Professor Graeme Davies and Baroness Pauline Perry, and opposed by Dr Peter Knight and Christopher Price. My scant notes of the occasion do not, alas, record the outcome, but say only: “Now politics is dead on the campus. Utilitarianism rules. Nationalisation produces mediocrity. Quangos quell dissent. Arid quality debate. The dull uniformity of 1995. Some students are too poor.”, which rather suggests that the opposers (both fluent and entertaining speakers) had the better of it. Whether the past or the future won, we just had to roll with it. The debate was prefaced by two short papers from Peter Scott (then at Leeds) on ‘The Shape of Higher Education to Come’, and Gareth Williams (Lancaster) on ‘Higher Education – the Next Thirty Years’.

    The debate was followed by a series of seminars presented by the Society’s six (!) distinguished vice-presidents, Christopher Ball, Patrick Coldstream, Malcolm Frazer, Peter Swinnerton-Dyer, Ulrich Teichler and Martin Trow, and then a concluding conference. SRHE was by 1995 perhaps passing its peak of influence on policy and management in UK HE, but was also steadily growing its reach and impact on teaching and learning. The Society staged a summer conference on ‘Changing the Student Experience’, leading to the 1995 annual conference. In those days each Conference was accompanied by an edited book of Precedings: The Student Experience was edited by Suzanne Hazelgrove (Bristol). One of the contributors and conference organisers, Phil Pilkington (Coventry), later reflected on the prominent role of SRHE in focusing attention on the student experience.

    Research into higher education was still a small enough field for SRHE to produce a Register of Members’ Research Interests in 1996, including Ron Barnett (UCL) (just getting started after only his first three books), Tony Becher, Ernest Boyer, John Brennan, Sally Brown, Rob Cuthbert, Jurgen Enders, Dennis Farrington, Oliver Fulton, Mary Henkel, Maurice Kogan, Richard Mawditt, Ian McNay, David Palfreyman, Gareth Parry, John Pratt, Peter Scott (in Leeds at the time), Harold Silver, Maria Slowey, Bill Taylor, Paul Trowler, David Watson, Celia Whitchurch, Maggie Woodrow, and Mantz Yorke.  SRHE members and friends, “there for you”. But storm clouds were gathering for the Society as it entered the next, financially troubled, decade.

    If you’ve read this far I hope you’re enjoying the musical references, or perhaps objecting to them (Rob Gresham, Paul Greatrix, I’m looking at you). There will be two more blogs in this series – feel free to suggest musical connections with HE events in or around 2005 or 2015, just email me at [email protected]. Or if you want to write an alternative history blog, just do it.

    Rob Cuthbert is editor of SRHE News and the SRHE Blog, Emeritus Professor of Higher Education Management, University of the West of England and Joint Managing Partner, Practical Academics. Email [email protected]. Twitter/X @RobCuthbert. Bluesky @robcuthbert22.bsky.social.


    [1] I know this was from the 1970s, but a parody version revived it in 1995

    [2] National Advisory Body (1987) Management for a Purpose: Report of the Good Management Practice Group. London: NAB

  • Either the sector cleans up academic partnerships, or the government does

    When the franchising scandal first broke, many thought it was going to be a flash in the pan, an airing of the darkest depths of the sector but something that didn’t really impact the mainstream.

    That hasn’t been the case.

    The more it digs, the more concerned the government seems to get, and the proposed reforms to register the largest delivery partners seem unlikely to mark the end of its attention.

    Last orders

    The sector would be foolish to wait for the Government’s response to its consultation, or for the Office for Students to come knocking. Subcontracted provision in England has increased 358 per cent over the past five years – and for some providers this provision significantly outnumbers the students they teach directly themselves. Franchised business and management provision has grown by 44 per cent, and the number of students from IMD quintile 1 (the most deprived) taught via these arrangements has increased 31 per cent, compared to an overall rise in student numbers of 15 per cent.

    The sector talks a big game about institutional autonomy – and they’re right to do so; it is a vital attribute of the UK sector. But it shouldn’t be taken for granted, and that means demonstrating clear action when practices are scrutinised.

    Front foot

    So today, QAA has released new comprehensive guidance (part of a suite sitting underneath the UK Quality Code) to help the sector get on the front foot. For the first time since the franchising scandal broke, experts from across the UK sector have developed a toolkit for anyone working in partnerships to know what good practice can look like, what questions they should be asking themselves, and how their own provision stacks up against what others are doing.

    The guidance is framed around three discrete principles: all partnerships should add direct value to the staff and student experience and widen learning opportunities; academic standards and the quality of the student experience should not be compromised; and oversight should be as rigorous, secure and open to scrutiny as the provision delivered by a single provider. All partners share responsibility for the student learning experience and the academic standards students are held to, but it is the awarding partner who is ultimately accountable for awards offered in its name.

    If you’re working in partnership management and are concerned about how your institution should be responding to the increased scrutiny coming from government, the guidance talks you through each stage of the partnership lifecycle, with reflective questions and scenarios to prompt consideration of your own practice. And as providers put the guidance and its recommendations into practice, they will be able to tell a more convincing and reassuring story about how they work with their partners to deliver a high quality experience.

    Starter for five

    But the sector getting its house in order will only quell concerns if those scrutinising feel assured of provider action. So for anyone concerned, we’ve distilled five starter questions from the guidance that we’d expect any provider to be able to answer about their partnerships.

    Are there clear and shared academic standards? Providers should be able to point to agreed terms on academic standards and quality assurance, and plans for continuous improvement.

    Is oversight tailored to risk? Providers who have a large portfolio should be able to demonstrate how they take an agile, proportionate approach to each partnership.

    What are the formal governance and accountability mechanisms? A provider’s governors or board should be able to tell you what decisions have been made and why.

    How is data used to drive performance and mitigate risk? Providers should be able to tell you what data they have and what it tells them about their partnerships and the students’ experience, and any actions they plan to take.

    And finally, how does your relationship enable challenge and improvement? Providers should be able to tell you when they last spoke to each of their partners, what topics were discussed and lead providers should be able to detail what mechanisms they use to hold their partners to account when issues arise.

    Integrity and responsibility

    The government has a duty to prevent misuse of public money and to ensure the integrity of a system that receives significant amounts of it. The regulator has a responsibility to investigate where it suspects there is poor practice and to act accordingly. But the sector has a responsibility – both to its students and to itself – to respond to the legitimate concerns raised around partnership provision and to demonstrate it’s taking action. This lever is just as important, if not more so, because government and regulatory action becomes more necessary and more stringent if we don’t get this right.

    The sector cannot afford not to grasp the nettle on this. Public trust, the sector’s reputation and, most importantly, the learning experience students deserve are all on the line.

    QAA’s guidance is practical, expert-informed and rooted in shared principles to help providers not only meet expectations but lead the way in restoring confidence. Because if the sector doesn’t demonstrate its commitment to action on this, the government and the regulator surely will.

  • Quality assurance needs consideration, not change for change’s sake

    It’s been a year since publication of the Behan review and six months since OfS promised to “transform” their approach to quality assessment in response. But it’s still far from clear what this looks like, or if the change is what the sector really needs.

    In proposals for a new strategy published back in December OfS suggested a refocus of regulatory activity to concentrate on three strategic priorities of quality, the wider student experience and financial resilience. But while much of the mooted activity within experience and resilience themes felt familiar, when it came to quality, more radical change was clearly on the agenda.

    The plans are heavily influenced by findings of last summer’s independent review (the Behan review). This critiqued what it saw as minimal interaction between assessment relating to baseline compliance and excellence, and recommended bringing these strands together to focus on general improvement of quality throughout the sector. In response OfS pledged to ‘transform’ quality assessment, retaining TEF at the core of an integrated approach and developing more routine and widespread activity.

    Current concerns

    Unfortunately, these bare-bones proposals raised more questions about the new integrated approach than they answered – and while OfS’s recent blog update was a welcome attempt to do more in the way of delivering timely and transparent information to providers, it disappointed on detail. OfS have been discussing key issues such as the extent of integration, scope for a new TEF framework, and methods of assessment. But while a full set of proposals will be out for consultation in the autumn, in the meantime there’s little to learn other than to expect a very different TEF which will probably operate on a rolling cycle (assessing all institutions over a four to five year period).

    The inability to cement preparations for the next TEF will cause some frustration for providers. However, if, as the tone of communications suggests, OfS is aiming for more disruptive integration rather than a simple expansion of the TEF, the proposals may present some bigger concerns for the sector.

    A fundamental concern is whether an integrated approach aimed at driving overall improvement is the most effective way to tackle the sector’s current challenges around quality. Behan’s review warns against an overemphasis on baseline regulation, but below-standard provision from a significant minority of providers is where the most acute risks to students, taxpayers and sector reputation lie (as opposed to failure to improve quality for the majority performing above the baseline). Regulation should support improvement across the board too, of course.

    However, it’s not clear how shifting focus away from the former, let alone moving it within a framework designed to assess excellence periodically, will usefully help OfS tackle stubborn pockets of poor provision and emerging threats within a dynamic sector.

    There is also an obvious tension inherent in any attempt to bring baseline regulation within a rolling cycle, which is manifest as soon as OfS find serious concerns about provider quality mid-cycle. Here we should expect OfS to intervene with investigation and enforcement where appropriate to protect the student and wider stakeholder interest. But doing so would essentially involve regulating on minimum standards on top of a system that’s aiming to do that already as part of an integrated approach. Moreover, if the whistleblowing and lead indicators which OfS seem keen to develop to alert them to issues operate effectively, and if OfS start looking seriously at franchise and potentially TNE provision, it’s easy to imagine this duplication becoming widespread.

    There is also the issue of burden for both regulator and providers which should be recognised within any significant shift in approach. For OfS there’s a question of the extent to which developing and delivering an integrated approach is hindering ongoing quality assessment. Meanwhile, getting to grips with new regulatory processes, and aligning internal approaches to quality assurance and reporting will inevitably absorb significant provider resource. At a time when pressures are profound, this is likely to be particularly unwelcome and could detract significantly from the focus on delivery and students. Ironically it’s hard to see how transformative change might not hamper the improvements in quality across the board that Behan advocates and prove somewhat counter-productive to the pursuit of OfS’ other strategic goals.

    The challenge

    It’s crucial that OfS take time to consider how best to progress with any revised approach, and sector consultation throughout the process is welcome. Nevertheless, development appears to be progressing slowly and somewhat at odds with OfS’ positioning as an agile and confident regulator operating in a dynamic landscape. Maybe this should tell us something about the difficulties inherent in developing an integrated approach.

    There’s much to admire about the Behan review and OfS’ responsiveness to the recommendations is laudable. But while Behan looks to the longer term, I’m not convinced that in the current climate there’s much wrong with the idea of maintaining the incumbent framework.

    Let’s not forget that this was established by OfS only three years ago following significant development and consultation to ensure a judicious approach.

    I wonder if the real problem here is that, in contrast to a generally well received TEF (and as Behan highlights), OfS’ work on baseline quality regulation simply hasn’t progressed with the speed, clarity and bite that were anticipated and necessary to drive positive change above the minimum. And I wonder if a better solution to pressing quality concerns would be for OfS to concentrate resources on improving operation of the current framework. There certainly feels to be room to deliver more – and more responsive, more transparent and more impactful – baseline investigations without radical change. At the same time, the feat of maintaining a successful and much expanded TEF seems much more achievable without bringing a significant amount of assurance activity within its scope.

    We may yet see a less intrusive approach to integration proposed by OfS. I think this could be a better way forward – less burdensome and more suited to the sector’s current challenges. As the regulator reflects on their approach over the summer with a new chair at the helm who’s closer to the provider perspective and more distanced from the independent review, perhaps this is one which they will lean towards.

  • Moving beyond the quality wars

    A decade since his passing, David Watson’s work remains a touchpoint of UK higher education analysis.

    This reflects the depth and acuity of his analysis, but also his ability as a phrasemaker.

    One of his phrases that has stood the test of time is the “quality wars” – his label for the convulsions in UK higher education in the 1990s and early 2000s over the assurance of academic quality and standards.

    Watson coined this phrase in 2006, shortly after the 2001 settlement that brought the quality wars to an end. A peace that lasted, with a few small border skirmishes, until HEFCE’s launch of its review of quality assessment in 2015.

    War never changes

    I wasn’t there, but someone who was has described to me a meeting at that time involving heads of university administration and HEFCE’s chief executive. As told to me, at one point a registrar of a large and successful university effectively called out HEFCE’s moves on quality assessment, urging HEFCE not to reopen the quality wars. I’ve no idea if the phrase Pandora’s box was used, but it would fit the tenor of the exchange as it was relayed to me.

    Of course this warning was ignored. And of course (as is usually the case) the registrar was right. The peace was broken, and the quality wars returned to England.

    The staging posts of the revived conflict are clear.

    HEFCE’s Revised operating model for quality assessment was introduced in 2016. OfS was established two years later, leading to the B conditions mark I, followed later the same year by a wholesale rewrite of the UK quality code that was reportedly largely prompted and/or driven by OfS. Only for OfS to decide by 2020 that it wasn’t content with this, repudiating the UK quality code, and implementing from 2022 the B conditions mark II (new, improved; well, maybe not the latter, but definitely longer).

    And a second front in the quality wars opened up in 2016, with the birth of the Teaching Excellence Framework (TEF). Not quite quality assessment in the by then traditional UK sense, but still driven by a desire to sort the sheep from the goats – identifying both the pinnacles of excellence and depths of… well, that was never entirely clear. And as with quality assessment, TEF was a very moveable feast.

    There were three iterations of Old TEF between 2016 and 2018. Then came the repeated insistence that subject-level TEF was a done deal, leading to huge amounts of time and effort on preparations in universities between 2017 and early 2020, only for subject-level TEF to be scrapped in 2021. At which point New TEF emerged from the ashes, embraced by the sector with an enthusiasm that was perhaps to be expected – particularly after the ravages of the Covid pandemic.

    And through New TEF the two fronts allegedly became a united force. To quote OfS’s regulatory advice, the B conditions and New TEF formed part of an “overall approach” where “conditions of registration are designed to ensure a minimum level” and OfS sought “to incentivise providers to pursue excellence in their own chosen way … in a number of ways, including through the TEF”.

    Turn and face the strange

    So in less than a decade English higher education experienced: three iterations of quality assessment; three versions of TEF (one ultimately not implemented, but still hugely disruptive to the sector); and a rationalisation of the links between the two that required a lot of imagination, and a leap of faith, to accept the claims being made.

    Pandora’s box indeed.

    No wonder that David Behan’s independent review of OfS recommended “that the OfS’s quality assessment methodologies and activity be brought together to form a more integrated assessment of quality.” Last week we had the first indications from OfS of how it will address this recommendation, and there are two obvious questions: can we see a new truce emerging in the quality wars; and given where we look as though we may end up on this issue, was this round of the quality wars worth fighting?

    Any assessment of where we are following the last decade of repeated and rapid change has to recognise that there have been some gains. The outcomes data used in TEF, particularly the approach to benchmarking at institutional and subject levels, is and always has been incredibly interesting and, if used wisely, useful data. The construction of a national assessment process leading to crude overall judgments just didn’t constitute wise use of the data.
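
    For readers unfamiliar with the technique, benchmarking of this kind is essentially a form of indirect standardisation: a provider’s benchmark is the sector-wide outcome rate for each student group, weighted by that provider’s own mix of students. A minimal sketch of the idea – the groups and rates here are invented for illustration, not OfS’s actual categories:

        # Sketch of demographics-weighted benchmarking (my gloss, not OfS's exact method)
        SECTOR_RATES = {"young_high_tariff": 0.95, "mature_low_tariff": 0.80}

        def benchmark(provider_mix: dict[str, float]) -> float:
            # provider_mix maps student group -> share of this provider's
            # students (shares sum to 1); the benchmark is the outcome rate
            # the sector as a whole achieves with that same mix
            return sum(share * SECTOR_RATES[group]
                       for group, share in provider_mix.items())

        print(benchmark({"young_high_tariff": 0.3, "mature_low_tariff": 0.7}))  # 0.845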

    And while many in the sector continue to express concern at the way such data was subsequently brought into the approach to national quality assessment by OfS, this has addressed the most significant lacuna of the pre-2016 approach to quality assurance. The ability to use this to identify specific areas and issues of potential concern for further, targeted investigation also addresses a problematic gap in previous approaches that were almost entirely focused on cyclical review of entire institutions.

    It’s difficult though to conclude that these advances, important elements of which it appears will be maintained in the new quality assessment approach being developed by OfS, were worth the costs of the turbulence of the last 10 years.

    Integration

    What appears to be emerging from OfS’s development of a new integrated approach to quality assessment essentially feels like a move back towards central elements of the pre-2016 system, with regular cyclical reviews of all providers (with or without visits, to be decided) against a single reference point (albeit the B conditions rather than the UK Quality Code). Of course it’s implicit rather than explicit, but it feels like an acknowledgment that the baby was thrown out with the bathwater in 2016.

    There are of course multiple reasons for this, but a crucial one has been the march away from the concept of co-regulation between universities and higher education providers. This was a conscious and deliberate decision, and one that has always been slightly mystifying. As a sector we recognise and promote the concept of co-creation of academic provision by staff and students, while being able to maintain robust assessment of the latter by the former. The same can and should be true of providers and regulators in relation to quality assurance and assessment, and last week’s OfS blog gives some hope that OfS is belatedly moving in this direction.

    It’s essential that they do.

    Another of David Watson’s memorable phrases was “controlled reputational range”: the way in which the standing of UK higher education was maintained by a combination of internal and external approaches. It is increasingly clear from recent provider failures and the instances of unacceptable practices in relation to some franchised provision that this controlled reputational range is increasingly at risk. And while this is down to developments and events in England, it jeopardises this reputation for universities across the UK.

    A large part of the responsibility for this must sit with OfS and its approach to date to regulating academic quality and standards. There have also been significant failings on the part of awarding bodies, both universities and private providers. The answer must therefore lie in partnership working between regulators and universities, moving closer to a co-regulatory approach based on a final critical element of UK higher education identified by Watson – its “collaborative gene”.

    OfS’s blog post on its developing approach to quality assessments holds out hope of moves in this direction. And if this is followed through, perhaps we’re on the verge of a new settlement in the quality wars.

  • The world is sorting out the quality of transnational education, but where is England?

    If you believe – as many do – that English higher education is among the best in the world, it can come as an unwelcome surprise to learn that in many ways it is not.

    As a nation that likes to promote the idea that our universities are globally excellent, it feels very odd to realise that the rest of the world is doing things rather better when it comes to quality assurance.

    And what’s particularly alarming about this is that the new state of the art is based on the systems and processes set up in England around two decades ago.

    Further afield

    The main bone of contention between OfS and the rest of the quality assurance world – the reason why England is coloured yellow rather than green on the infamous EQAR map, and the reason why QAA had to demit from England’s statutory Designated Quality Body role – is that the European Standards and Guidelines (ESG) require cyclical review of institutional quality processes that involves the opinions of students, while OfS wants things to be more risk-based (vibes-based, a cynic might say) and feels quality assurance is far too important to get actual students involved.

    Harsh? Perhaps. In the design of its regulatory framework the OfS was aiming to reduce burden by focusing mainly on where there were clear issues with quality – with the enhancement end handled by the TEF and the student aspect handled by actual data on how students get on academically (the B3 measures of continuation, completion, and progression) and more generally (the National Student Survey). It has even been argued (unsuccessfully) that, as TEF is kind of cyclical if you squint a bit and does sort of involve students, England is in fact ESG compliant.

    It’s not that OfS was deliberately setting out to ignore international norms – it was more that it was trying to address English HE’s historic dislike of lengthy external reviews of quality as it established a radically new system of regulation, and cyclical reviews with detailed requirements on student involvement were getting in the way. Obviously this was completely successful, as now nobody complains about regulatory burden and there are no concerns about the quality of education in any part of English higher education among students or other stakeholders.

    Those ESG international standards were first published in 2005, with the (most recent) 2015 revision adopted by ministers from 47 countries (including the UK). There is a revision underway led by the E4 group: the European Association for Quality Assurance in Higher Education (ENQA), ESU, EUA and EURASHE – fascinatingly, the directors of three out of four of these organisations are British. The ESG are the agreed official standards for higher education quality assurance within the Bologna process (remember that?) but are also influential further afield, as a reference point for similar standards in Africa, South East Asia, and Latin America. The pandemic knocked the process off kilter a bit, but a new ESG is coming in 2027, with a final text likely to be available in 2026.

    A lot of the work has already been done, not least via the ENQA-led and EU-funded QA-FIT project. The final report, from 2024, set out key considerations for a new ESG – it’s very much going to be a minor review of the standards themselves, but there is some interesting thinking about flexibility in quality assurance methodologies.

    The UK is not England

    International standards are reflected more clearly in other parts of the UK.

    Britain’s newest higher education regulator, Medr, continues to base higher education quality assurance on independent cyclical reviews involving peer review and student input, which reads across to widely accepted international standards (such as the ESG). Every registered provider will be assessed at least every five years, and new entrants will be assessed on entry. This sits alongside a parallel focus on teaching enhancement and a focus on student needs and student outcomes – plus a programme of triennial visits and annual returns to examine the state of provider governance.

    Over at the Scottish Funding Council the Tertiary Quality Enhancement Framework (TQEF) builds on the success of the enhancement themes that have underpinned Scottish higher education quality for the past 20 years. The TQEF again involves ESG-compliant cyclical independent review alongside annual quality assurance engagements with the regulator and an intelligent use of data. As in Wales, there are links across to the assessment of the quality of governance – but what sets TQEF apart is the continued focus on enhancement, looking not just for evidence of quality but evidence of a culture of improvement.

    Teaching quality and governance are also currently assessed by cyclical engagements in Northern Ireland. The (primarily desk-based) Annual Performance Review draws on existing data and peer review, alongside a governance return and engagement throughout the year, to give a single rating to each provider in the system. Where there are serious concerns an independent investigation (including a visit) is put in place. A consultation process to develop a new quality model for Northern Ireland is underway – the current approach simply continues the 2016 HEFCE approach (which was, ironically, originally hoped to cover England, Wales, and Northern Ireland while aligning to ESG).

    The case of TNE

    You could see this as a dull, doctrinal dispute of the sort that higher education is riven with – you could, indeed, respond in the traditional way that English universities do in these kinds of discussions by putting your fingers in your ears and repeating the word “autonomy” in a silly voice. But the ESG is a big deal: demonstrating compliance is near-essential if you want to get stuck into any transnational education or set up an international academic partnership.

    As more parts of the world demand access to high quality higher education, it seems fair to assume that much of it will be delivered – in country or online – by providers elsewhere. In England, we still have no meaningful way of assuring the quality of transnational education (something that we appear to be among the best in the world at expanding). Indeed, we can’t even collect individualised student data about TNE.

    Almost by definition, regulation of TNE requires international cooperation and international knowledge – the quasi-colonial idea that if the originating university is in good standing then everything it does overseas is going to be fine is simply not an option. National quality systems need to be receptive to collaboration and co-regulation as more and more cross-border provision is developed – offering rigour, comparability (to avoid unnecessary burden), and the flexibility to meet local needs and concerns.

    Of course, concerns about the quality of transnational education are not unique to England. ENQA has been discussing the issue as part of conversations around the ESG – and there are plans to develop an international framework, with a specific project to develop this already underway (which involves our very own QAA). Beyond Europe, the International Network for Quality Assurance Agencies in Higher Education (INQAAHE – readers may recall that, at great expense, OfS is an associate member, and that the current chair is none other than the QAA’s Vicki Stott) works in partnership with UNESCO on cross-border provision.

    And it will be well worth keeping an eye on the forthcoming UNESCO second intergovernmental conference of states parties to the Global Convention on Higher Education later this month in Paris, which looks set to adopt provisions and guidance on TNE with a mind to developing a draft subsidiary text for adoption. The UK government ratified the original convention, which at heart deals with the global recognition of qualifications, in 2022. That seems to be the limit of UK involvement – there have been no signs that the UK government will even attend this meeting.

    TNE, of course, is just one example. There’s ongoing work about credit transfer, microcredentials, online learning, and all the other stuff that is on the English to-do pile. They’re all global problems and they will all need global (or at the very least, cross system) solutions.

    Plucky little England going it alone

    The mood music at OfS – as per some questions to Susan Lapworth at a recent conference – is that the quality regime is “nicely up and running”, with the various arms of activity (threshold assessment for degree awarding powers, registration, and university titles; the B conditions and associated investigations; and the Teaching Excellence Framework) finally and smoothly “coming together”.

    A blog post earlier this month from Head of Student Outcomes Graeme Rosenberg outlined more general thinking about bringing these strands into better alignment, while taking the opportunity to fix a few glaring issues (yes, our system of quality assurance probably should cover taught postgraduate provision – yes, we might need to think about actually visiting providers a bit more as the B3 investigations have demonstrated). On the inclusion of transnational education within this system, the regulator has “heard reservations” – which does not sound like the issue will be top of the list of priorities.

    To be clear, any movement at all on quality assurance is encouraging – the Industry and Regulators Committee report was scathing on the then-current state of affairs, and even though the Behan review solidified the sense that OfS would do this work itself, it was far from happy with the fragmentary, poorly understood, and internationally isolated system it found.

    But this still keeps England a long way off the international pace. The ESG standards and the TNE guidance UNESCO eventually adopts won’t be perfect, but they will be the state of the art. And England – despite historic strengths – doesn’t even really have a seat at the table.

  • Subject-level insights on graduate activity

    We know a lot about what graduates earn.

    Earnings data—especially at subject level—has become key to debates about the value of higher education.

    But we know far less about how graduates themselves experience their early careers. Until now, subject-level data on graduate job quality—how meaningful their work is, how well it aligns with their goals, and whether it uses their university-acquired skills—has been missing from the policy debate.

    My new study (co-authored with Fiona Christie and Tracy Scurry and published in Studies in Higher Education) aims to fill this gap. Drawing on responses from the 2018-19 graduation cohort in the national Graduate Outcomes survey, we provide the first nationally representative, subject-level analysis of these subjective graduate outcomes.

    What we find has important implications for how we define successful outcomes from higher education—and how we support students in making informed choices about what subject to study.

    What graduates tell us

    The Graduate Outcomes survey includes a set of questions—introduced by HESA in 2017—designed to capture core dimensions of graduate job quality. Respondents are asked (around 15 months after graduation) whether they:

    • find their work meaningful
    • feel it aligns with their future plans
    • believe they are using the skills acquired at university

    These indicators were developed in part to address the over-reliance on income as a measure of graduate success. They reflect a growing international awareness that economic outcomes alone offer a limited picture of the value of education—in line with the OECD’s Beyond GDP agenda, the ILO’s emphasis on decent work, and the UK’s Taylor Review focus on job quality.

    Subject-level insights

    Our analysis shows that most UK graduates report positive early-career experiences, regardless of subject. Across the sample, 86 per cent said their work felt meaningful, 78 per cent felt on track with their careers, and 66 per cent reported using their degree-level skills.

    These patterns generally hold across disciplines, though clear differences emerge. The chart below shows the raw, unadjusted proportion of graduates who report positive outcomes. Graduates from vocational fields—such as medicine, subjects allied to medicine, veterinary science, and education—tend to report particularly strong outcomes. For instance, medicine and dentistry graduates were 12 percentage points more likely than average to say their work was meaningful, and over 30 points more likely to report using the skills they acquired at university.

    However, the results also challenge the narrative that generalist or academic degrees are inherently low value. As you can see, most subject areas—including history, languages, and the creative arts, often targeted in these debates—show strong subjective outcomes across the three dimensions. Only one field, history and philosophy, fell slightly below the 50 per cent threshold on the skills utilisation measure. But even here, graduates still reported relatively high levels of meaningful work and career alignment.

    Once we adjusted for background characteristics—such as social class, gender, prior attainment, and institutional differences—many of the remaining gaps between vocational and generalist subjects narrowed and were no longer statistically significant.
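
    For the statistically curious, here is a minimal sketch of what this kind of adjustment involves, using simulated data and a logistic regression. The variable names, controls, and model specification are illustrative assumptions, not the authors’ actual method.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500

    # Simulated graduates: the raw subject gap is partly driven by prior attainment.
    df = pd.DataFrame({
        "subject": rng.choice(["medicine", "history", "creative_arts"], size=n),
        "female":  rng.integers(0, 2, size=n),
        "tariff":  rng.normal(120, 20, size=n),
    })
    xb = -2.0 + 0.02 * df["tariff"] + 0.3 * (df["subject"] == "medicine")
    df["meaningful"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

    # Raw (unadjusted) proportions by subject...
    print(df.groupby("subject")["meaningful"].mean())

    # ...versus subject effects once background characteristics are controlled for.
    model = smf.logit("meaningful ~ C(subject) + female + tariff", data=df).fit(disp=0)
    print(model.params)
    ```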

    This chart shows the raw proportion of 2018-19 graduates who agree or strongly agree that their current work is meaningful, on track and using skills, by field of study (N = 67,722)

    Employment in a highly skilled occupation—used by the Office for Students (OfS) as a key regulatory benchmark—was not a reliable predictor of positive outcomes. This finding aligns with previous HESA research and raises important questions about the appropriateness of using occupational classification as a proxy for graduate success at the subject level.

    Rethinking what we measure and value

    These insights arrive at a time when the OfS is placing greater emphasis on regulating equality of opportunity and ensuring the provision of “full, frank, and fair information” to students. If students are to make informed choices, they need access to subject-level data that reflects more than salary, occupational status, or postgraduate progression. Our findings suggest that subjective outcomes—how graduates feel about their work—should be part of that conversation.

    For policymakers, our findings highlight the risks of relying on blunt outcome metrics—particularly earnings and occupational classifications—as indicators of course value. Our data show that graduates from a wide range of subjects—including those often labelled as “low value”—frequently go on to report meaningful work shortly after graduation that aligns with their future plans and makes use of the skills they developed at university.

    And while job quality matters, universities should not be held solely accountable for outcomes shaped by employers and labour market structures. Metrics and league tables that tie institutional performance too closely to job quality risk misrepresenting what higher education can influence. A more productive step would be to expand the Graduate Outcomes survey to include a wider range of job quality indicators—such as autonomy, flexibility, and progression—offering a fuller picture of early career graduate success.

    A richer understanding

    Our work offers the first nationally representative, subject-level insight into how UK graduates evaluate job quality in the early stages of their careers. In doing so, it adds a missing piece to the value debate—one grounded not just in earnings or employment status, but in graduates’ own sense of meaning, purpose, and skill use.

    If we are serious about understanding what graduates take from their university experience, it’s time to move beyond salary alone—and to listen more carefully to what graduates themselves are telling us.

    DK notes: Though the analysis that Brophy et al have done (employing listwise deletion, examining UK domiciled first degree graduates only) enhances our understanding of undergraduate progression and goes beyond what is publicly available, I couldn’t resist plotting the HESA public data in a similar way, as it may be of interest to readers:

    [Interactive chart: HESA public data on the three measures, plotted by subject area]
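
    For anyone wanting to attempt something similar, here is a minimal sketch of how such a chart might be produced, assuming a hypothetical long-format CSV extract of the HESA public data with subject, measure and proportion columns (the real extract’s layout will differ).

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical extract: one row per subject x measure combination.
    df = pd.read_csv("hesa_graduate_voice.csv")    # assumed file name

    # One horizontal bar per subject for each measure, sorted by "meaningful"
    # (assumes the extract uses that label for the meaningful-work measure).
    pivot = (df.pivot(index="subject", columns="measure", values="proportion")
               .sort_values("meaningful"))
    pivot.plot.barh(figsize=(8, 10))
    plt.xlabel("Proportion agreeing (%)")
    plt.legend(title="Measure")
    plt.tight_layout()
    plt.show()
    ```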

  • Risk-based quality regulation – drivers and dynamics in Australian higher education

    by Joseph David Blacklock, Jeanette Baird and Bjørn Stensaker

    ‘Risk-based’ models of quality regulation have become increasingly popular in higher education globally. At the same time there is limited knowledge of how risk-based regulation can be implemented effectively.

    Australia’s Tertiary Education Quality and Standards Agency (TEQSA) started to implement risk-based regulation in 2011, aiming at an approach balancing regulatory necessity, risk and proportionate regulation. Our recently published study analyses TEQSA’s evolution between 2011 and 2024 to contribute to an emerging body of research on the practice of risk-based regulation in higher education.

    The challenges of risk-based regulation

    Risk-based approaches are seen as a way to create more effective and efficient regulation, targeting resources to the areas or institutions of greatest risk. However, it is widely acknowledged that sector specificities, political economy and social context exert a significant influence on the practice of risk-based regulation (Black and Baldwin, 2010). Choices made by the regulator also affect its stakeholders and its perceived effectiveness – consider, for example, whose ideas about risk are privileged. For TEQSA, balancing the expectations of these stakeholders, along with its federal mandate, has required much in the way of compromise.

    The evolution of TEQSA’s approaches

    Our study uses a conceptual framework suggested by Hood et al (2001) for comparative analysis of risk regulation regimes, which charts aspects of context and content respectively. With this as a starting point we arrive at two theoretical constructs – ‘hyper-regulation’ and ‘dynamic regulation’ – as a way to analyse the development of TEQSA over time. These opposing regulatory approaches represent both theoretical and empirical executions of the risk-based model within higher education.

    From extensive document analysis, independent third-party analysis, and Delphi interviews, we identify three phases to TEQSA’s approach:

    • 2011-2013, marked by practices similar to ‘hyper-regulation’, including suspicion of institutions, burdensome requests for information and a perception that there was little ‘risk-based’ discrimination in use
    • 2014-2018, marked by the use of more indicators of ‘dynamic regulation’, including reduced evidence requirements for low-risk providers, sensitivity to the motivational postures of providers (Braithwaite et al. 1994), and more provider self-assurance
    • 2019-2024, marked by a broader approach to the identification of risks, greater attention to systemic risks, and more visible engagement with Federal Government policy, as well as the disruption of the pandemic.

    Across these three periods, we map a series of contextual and content factors to chart those that have remained more constant and those that have varied more widely over time.

    Of course, we do not suggest that TEQSA’s actions fit precisely into these timeframes, nor that they have been guided by a wholly consistent regulatory philosophy in each phase. After the early and very visible adjustment of TEQSA’s approach, there has been an ongoing series of smaller changes, influenced by the available resources, the views of successive TEQSA commissioners and the wider higher education landscape.

    Lessons learned

    Our analysis, building on ideas and perspectives from Hood, Rothstein and Baldwin, offers a comparatively simple yet informative taxonomy for future empirical research.

    TEQSA’s start-up phase, in which a hyper-regulatory approach was used, can be linked to a contextual need of the Federal Government at the time to support Australia’s international education industry, leading to the rather dominant judicial framing of its role. However, TEQSA’s initial regulatory stance failed to take account of the largely compliant regulatory posture of the universities that enrol around 90% of higher education students in Australia, and of the strength of this interest group. The new agency was understandably nervous about Government perceptions of its performance; however, a broader initial charting of stakeholder risk perspectives could have provided better guardrails. Similarly, a wider questioning of the sources of risk in TEQSA’s first and second phases could have highlighted more systemic risks.

    A further lesson for new risk-based regulators is to ensure that the regulator itself has a strong understanding of risks in the sector, to guide its analyses, and can readily obtain the data to generate robust risk assessments.

    Our study illustrates that risk-based regulation in practice is as negotiable as any other regulatory instrument. The ebb and flow of TEQSA’s engagement with the Federal Government and other stakeholders provides the context. As predicted by various authors, constant vigilance and regular recalibration are needed by the regulator as the external risk landscape changes and the wider interests of government and stakeholders dictate. The extent to which there is political tolerance for any ‘failure’ of a risk-based regulator is often unstated and always variable.

    Joseph David Blacklock is a graduate of the University of Oslo’s Master’s programme in Higher Education, with a special interest in risk-based regulation and government instruments for managing quality within higher education.

    Jeanette Baird consults on tertiary education quality assurance and strategy in Australia and internationally. She is Adjunct Professor of Higher Education at Divine Word University in Papua New Guinea and an Honorary Senior Fellow of the Centre for the Study of Higher Education at the University of Melbourne.

    Bjørn Stensaker is a professor of higher education at the University of Oslo, specialising in studies of policy, reform and change in higher education. He has published widely on these issues in a range of academic journals and other outlets.

    This blog is based on our article in Policy Reviews in Higher Education (online 29 April 2025):

    Blacklock, JD, Baird, J & Stensaker, B (2025) ‘Evolutionary stages in risk-based quality regulation in Australian higher education 2011–2024’ Policy Reviews in Higher Education, 1–23.
