Category: Quality

  • So you’ve been accused of harbouring “Mickey Mouse” courses at your institution…now what?

Margaret Hodge’s 2003 speech to the Institute for Public Policy Research on “achieving excellence and equality in post-16 education” tells us that even under New Labour, policy announcements on higher education were “long-awaited.”

The speech illustrates how then, as now, the government was grappling with how to grow and massify participation while retaining the sector’s global competitiveness, promote specialisation and collaboration, and boost quality and civic engagement.

    Hodge had taken to the stage to explain the government’s plans for driving up HE participation to at least 50 per cent of young people and signal the themes of its forthcoming higher education strategy – but warned that doing so via “stacking up numbers on Mickey Mouse courses” was “not acceptable.”

    Hodge’s usage shows that she – or her speechwriter – assumed that the meaning of the term “Mickey Mouse course” was widely understood. But as DK has explored elsewhere on the site, Mickey Mouse’s meanings when applied to higher education have shifted and evolved according to cultural context.

What has remained consistent, however, is the assumption that there is a chunk of HE provision that all right-thinking people can see obviously shouldn’t “count” as HE – because it’s unserious, or too popular, or on a topic that’s not traditionally been seen as academic or, in the recent analysis from the Taxpayers’ Alliance, ideologically suspect.

    Let’s imagine you’re a university press officer looking at a message on your phone or a note in your email inbox requesting that you explain succinctly by 3pm today why it’s entirely sane and reasonable to offer courses in e-gaming, fashion, filmmaking, tourism, mental health, gender identity, outdoor learning, climate change, sports or any one of a long tail of stuff the proverbial man on the Clapham omnibus wouldn’t see the point of. What’s your strategy?

    Make it go away

Back in 2003, the BBC reported that Margaret Hodge swiftly felt the sharp end of university leaders’ tongues – leaders who apparently called her remarks “offensive” and “ill-informed.” It’s hard to imagine a government minister getting such short shrift from the sector today – while some of the issues might look similar, the political landscape has changed enormously.

Even so, Option One, the dismissive approach, is seductive. There are several flavours of dismissive available: you could point out that higher education institutions hold their own degree awarding powers and are responsible for their own quality, academic standards, and curriculum, and that ergo, any course provided by a legitimate HE provider is itself legitimate. Or you could question the motives of the questioner and suggest that the framing is a political act designed to discredit universities and higher education by those who wish the sector ill. The moral high ground feels pretty good, and has the advantage of refusing to concede the principle of the question, but it doesn’t necessarily contribute to public understanding of contemporary higher education.

A whole bunch of institutions approached for comment simply did not respond – possibly because they were asked to do so during the Christmas break, but it may also have been because they refused to dignify the question with a response, an approach that might be characterised as Option One (b).

    De-escalate

    The institutions who chose to respond to the Telegraph when confronted with the evidence amassed by the Taxpayers’ Alliance seem to have in the main gone for Option Two: explain and clarify – and try to wedge in a plug for the institution.

    Thus the University of Cumbria’s spokesperson explains that its MA in outdoor experiential learning is “designed for those passionate about transforming education, inspiring sustainability, and reshaping how we engage with experience in learning” – and notes that the university is in the top ten for graduate destinations. The University of Nottingham’s spokesperson points out that its workplace health and wellbeing course is postgraduate level, and therefore not taxpayer funded – and says the course encourages “a rigorous scientific approach that fulfils and exceeds legal requirements to support organisational performance and effectiveness and enhance worker productivity.”

There are clear merits to this approach – essentially it smothers the reputational fire with approved corporate narratives. When the Telegraph comes to call during the Christmas break you probably don’t lob your scanty communications resources at anything other than de-escalation. This, arguably, is not the moment to start a media scrap and find yourself inadvertently the “face” of the Mickey Mouse debate. Experience shows that that sort of thing can haunt your institution for ages and goodness knows everyone’s got enough to worry about without that.

    Engage in the debate

But we should give at least a decent bit of consideration to Option Three: full-throated defence offered in language that people recognise as meaningful. That means more or less grudgingly accepting the premise that it’s hard for everyone to see why some lucky, lucky students get to study something as fun and creative and glamorous as fashion or “the outdoors” or identity or filmmaking. It involves painting a succinct picture of what these subjects achieve for students, and industries, not in big picture stats but in human terms, in stories.

I have two children, one in a state primary which, like many, has invested in a forest school. When my son was in reception he got to learn outdoors once a week; since then it’s been once a term at most. I can’t believe I’m the only parent of an active kid who is troubled by how little time the system affords kids to learn in and about nature.

Or, not to make this all about my kids, but as a parent I find computer games are a pretty big feature of my life. I can see how gaming can offer opportunities for my kids to problem solve and develop tactical and situational awareness, but I want to be sure they are safe when they do that – thanks, e-games courses.

Or, I’m a middle aged woman who sometimes struggles to find clothes that feel right for my professional and personal identity. Or I’m someone who wants to understand why the gender identity “debate” has become so toxic and what my orientation to it should be. Or I’m worried that my efforts to put my rubbish in the right bins aren’t going to deliver on that net zero target, and is that even a useful target anyway?

OK, my preoccupations are very obviously filtered through the lens of a middle-class London liberal. I’m not suggesting I’m a typical Telegraph reader – but I’m using my own sense of what the existence of these courses might mean for me to illustrate the point that lots of them touch people’s everyday concerns in ways that could be surfaced more powerfully.

The “Mickey Mouse” accusation runs deeper than notions of social irrelevance, however – inherent in the proposition that something is “Mickey Mouse” is a questioning of whether these are subjects and courses that deserve to be part of the thing we call higher education. And that’s a much harder challenge to meet, because doing so may feel like it requires a referral back to expertise, or knowledge that is inaccessible to the common reader, and therefore will struggle to “cut through” in any media response.

    Outside the realm of quality and standards regulation the question of why something is a legitimate source of higher education study speaks to the range of conceptions of higher education value. Is it worthwhile because there is labour market demand for it, because it is sufficiently complex to constitute a structured body of knowledge that merits deep intellectual engagement, because the resourcing required to study it is only accessible in higher education contexts, because of its wider social relevance or some thrilling combination of all these? And how on earth do you capture all that in a media quote?

I’ve been puzzling over this all week, and have come to the conclusion that there can’t be a silver bullet for defending the HE-ness of any given course, especially when the framing of the scepticism is so multi-faceted. One person’s useful labour market skills are another person’s lack of intellectual rigour. There’s no easy “win” available for this argument – but there might be a position that feels authentic and worthwhile, rooted in the course’s own conception of its meaning and value, set within the wider institutional framework of mission and purpose.

    Latent to salient

It’s not, I think, that institutions and their staff have no sense of why their courses are meaningful as higher education, but that this knowledge is so deeply embedded in the structures and cultures of the institution as to be almost entirely latent and unarticulated. Yet to capture any of this pithily, in the teeth of a sceptical line of questioning, that knowledge needs to be made explicit and intentionally surfaced.

Any institution will have a stock of anecdotes, insight and ideas about why their courses matter, in human terms. This knowledge isn’t always held in comms teams, who are not always linked closely with the nuts and bolts of the academic endeavour. It’s not an easy ask, but I’d argue that it’s worth comms teams spending some real time in some of the university’s less “mainstream” course offerings, putting forward the sorts of challenges around value that a hostile media outlet or think tank might present and understanding the nuances of the answers before working them into something media-friendly. Don’t just talk to the programme leaders, ask to audit the classes. Direct experience trumps the course marketing brochure every time.

Because when it comes to that unexpected phone call or email asking for the justification for these woke, un-rigorous, pointless degrees, and deciding how best to respond, it’s great to at least have the option to explain why these courses are not merely legitimate higher education provision but essential for the furtherance of human flourishing.

    Source link

  • Identifying “mickey mouse” courses | Wonkhe

    St Valentine’s Day, 1966. Salem, Oregon.

    State legislator Morris Crothers (Salem-R), a qualified doctor, is unhappy with a Bachelor’s degree in Medical Technology offered by the Oregon Technical Institute (OTI, formerly Oregon Polytechnic).

    The Capital Journal reported his words:

    a mickey mouse degree that would not allow those earning it to practice in most Oregon laboratories.

    His issue isn’t with the content of the degree, but with his perception that it does not qualify a graduate to perform certain licensed tests (including the pre-marriage test for syphilis) in the state of Oregon. I say perception because it turned out he was wrong and the course was accredited – 131 graduates were already employed by the state. His real issue was that OTI wasn’t a proper four-year college, and had low entry requirements.

    OTI chancellor RE Lieuallen responded (as recorded in The Oregonian): “Here we get into the question of the liberal arts background … some people would say that a job-oriented programme is better”.

    Crothers withdrew his accusation, claiming “the news media quoted me a little out of context”.

    This is the earliest published newspaper use of the pejorative term “mickey mouse degree”. And it betrayed a lack of understanding, and a certain level of snobbery, rather than academic failings.

    From Morris to Maurice

In the academic literature a letter to the journal American Speech from Michigan State University’s Maurice Crane slightly predates Salem’s tawdry tale: in 1958 (volume 33, number 3) his letter (“Vox Bop”) offers a partial lexicon of historic midwestern jazz slang, in which he observes:

    Incidentally, a mickey or Mickey Mouse band is not merely a ‘pop tune’ band … but the kind of pop band that sounds as if it is playing background for an animated cartoon. […] This term, which has been around almost as long as Mickey Mouse himself, has also come into common parlance in another sense at Michigan State, where a ‘Mickey Mouse course’ means a snap course, or what Princeton undergraduates in my day called a gut course

    It’s unhelpful to have slang defined by reference to earlier slang, but Collins dictionary tells us a snap course was “an academic course that can be passed with a minimum of effort”.

For things dismissed as “hobby courses” – usually arts, crafts, and leisure pursuits – there is a suspicion that such provision lacks academic rigour. The economic value argument is less pronounced here – the sheer size of the Disney industry is just one example of how much money and time human beings devote to hobbies and interests.

    The jazzman’s derivation is interesting in that jazz is itself based on “pop tunes” – the distinction Crane draws is around the manner of playing rather than the repertoire itself. Whether you play them with a “hip” jazz inflection or a “square” pop sensibility these are difficult tunes that are challenging to play and perform well.

    Morris dancing

    The first UK press sighting of the term was in 1972 – the Nottingham Guardian Journal published a letter from an irate Loughborough resident concerning governance problems at the Institute of Race Relations (a “so-earnest group of sociologists, permissives, and mickey mouse degree holders all speaking at the same time.”)

Here the mouse is used to imply suspicions about the political project underpinning a degree course – in the same way that the likes of the Taxpayers’ Alliance is able to classify courses on topics as complex and crucial as climate science and mental health as being “mickey mouse.”

    Although Margaret Hodge famously used the term in a speech to the Institute for Public Policy Research on 13 January 2003 she did not coin the phrase. Her perhaps ill-chosen words masked the actual intent of her speech – she was attempting to encourage the growth of two-year foundation degree provision in subjects that met the needs of local industry. This is a diametrically opposite position to the one taken by Morris Crothers – which serves to illustrate why the idea has become so useful. A “mickey mouse degree” is simply a term for higher education provision that the speaker doesn’t like.

Many of the early media examples on this side of the Atlantic are actually playful subversions of the trope (University of Exeter drama lecturer Robin Allan received “Britain’s first PhD on Walt Disney” in 1994 – the Torquay Herald Express tells us that Mickey himself turned up on graduation day!) suggesting that the term had currency long before it was introduced to the parliamentary record. That wasn’t Margaret Hodge either – Liberal Democrat MP Simon Hughes used the phrase to defend the University of Westminster from such attacks in the media, in a debate on the private City of Westminster Bill in June 1995.

    You’re so fine you blow my mind

    So to describe a course as “mickey mouse” is to make a judgement that it is either academically frivolous, politically suspect, or economically worthless: and – importantly – popular. A drawing of an anthropomorphic rodent is worthless, while Mickey Mouse himself is worth billions of dollars to the Disney corporation: to use the term is to ignore a widely perceived value in favour of your own judgement.

For this reason, a list of “mickey mouse courses” – such as the one published by the Telegraph on 3 January – is the purest expression of the long running “low quality courses” debate. It floats free of metrics and data simply to reinforce prejudices.

    The 787 courses identified by a researcher (Callum McGoldrick) at the Taxpayers’ Alliance were selected based on his own judgement and assigned to one of five categories:

    • Fashion (including textiles and jewellery)
    • Games (by which I mean computer games industry related courses)
    • Media (film, photography, and – with apologies to Maurice Crane – both jazz and popular music)
    • Woke (inevitably – mostly things to do with ethnicity, gender, mental wellbeing, and sustainability)
    • Misc (which includes specifically leisure-linked vocational courses, and more general arts and crafts provision)

    There’s no distinction drawn between undergraduate, postgraduate, and non-credit-bearing provision, and (as the article illustrates) not all of the courses described are currently recruiting or funded via student loans. Courses were drawn from a series of freedom of information requests – so the list, as well as being arbitrary, is not exhaustive. It covers just 51 providers.

    It feels like a horribly labour intensive way of getting an article into the Telegraph, and as a service to contrarian think-tanks everywhere I’ve built a little tool to optimise the process. Just type a word that makes you angry into the box on the left and you get both a count and a complete list of currently recruiting undergraduate courses with that word in their title to give you that special tingly feeling.
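The logic behind the tool is about as simple as it sounds. Here’s a minimal sketch of the same idea in Python, assuming a hypothetical courses.csv of currently recruiting undergraduate course titles – the real thing runs on live course data, which isn’t reproduced here.

```python
# A minimal sketch of the "angry word" course counter described above.
# Assumes a hypothetical courses.csv with one undergraduate course title
# per row under a "title" column; the embedded tool uses live data instead.
import csv

def count_courses(keyword: str, path: str = "courses.csv") -> tuple[int, list[str]]:
    """Return how many course titles contain the keyword, plus the matching titles."""
    keyword = keyword.lower()
    with open(path, newline="", encoding="utf-8") as f:
        titles = [row["title"] for row in csv.DictReader(f)]
    matches = [title for title in titles if keyword in title.lower()]
    return len(matches), matches

if __name__ == "__main__":
    count, courses = count_courses("surf")
    print(f"{count} currently recruiting courses mention 'surf':")
    for course in courses:
        print(" -", course)
```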

    The bigger question

In a 2003 article for the Guardian, Emma Brockes examined the “mickey mouse” course industry in the light of Margaret Hodge’s comments, observing that “every generation has its Mickey Mouse degrees – arts subjects were mocked in the 60s and 70s, sociology in the 80s and gender studies in the 1990s.” She noted:

“There are degrees made ludicrous by virtue of their specificity (a BA (Hons) in air-conditioning). There are degrees ridiculed for their non-specificity (citizenship studies, which, to its detractors, is so broad that it might as well be called “shit that happens in the world” studies). There are the apparent oxymorons – turfgrass science, amenity horticulture, surf and beach management and the BSc from Luton University in decision-making, which begs the cheap but irresistible observation, how did those on the course manage to make the decision to take it in the first place?”

    She hangs her piece on an interview with the news editor of the Coalville and Ashby Times – one Paul Marston, a recent media studies graduate from De Montfort University. Though he does mount a defence (which Brockes rather snootily describes as half-hearted) of the relevance and interest of his degree, he laments that:

    I’m finding it difficult to move on in my career now, and I do put that down partly to my degree. It was very general, very broad, good for keeping my options open, but it doesn’t seem to have prepared me for anything much else.

The early 00s were perhaps not the most auspicious point to begin a career in local journalism, but LinkedIn does confirm that Marston has had a successful career in media and communications – currently leading internal communications for defence company MBDA. It’s not clear his media studies degree directly prepared him for that role, but it feels reasonable to suggest it may have had an impact, in the same way that niche, broad, and oxymoronic courses help graduates into careers all the time.

    The “mickey mouse” accusations seldom have much to do with actual concerns about course quality. You’ll look in vain for any sign of the kind of courses that OfS and DfE are currently concerned about (franchise delivery, business studies), and the only time you see a link to metrics is with graduate salaries (which, I would argue, says more about low pay in certain industries than any failings of the courses themselves).

    It is easy and unsatisfying to critique the methodology, because (as with everything like this) the methodology isn’t the point. The prejudice, and the way people respond to it, is a much bigger issue.

    On the recent Taxpayers’ Alliance efforts, researcher Callum McGoldrick told the Telegraph:

    Taxpayers are sick of seeing their hard-earned cash subsidise rip-off degrees that offer little to no return on investment. These Mickey Mouse subjects are essentially a state-sponsored vanity project where universities fill their coffers while the public picks up the tab for loans which will never come close to being fully repaid. We need to stop funding hobby courses and start prioritising rigorous subjects that actually boost the economy and deliver value for money

As much as the current fashion for skills planning (at levels from the local to the global) and vocational training speaks to the anxieties of a government and nation increasingly unsure of itself in a radically changing world, there’s also a sense in which it is a kind of play-acting. Sometimes we don’t think the skills we need are skills at all: while manufacturing in the UK is healthier than popularly imagined we obsess over ensuring we have the skills we need to do that, and there is far less attention devoted to the myriad professions that keep our theatres and venues delighting audiences. We clearly need both, for our economy to thrive.

In 1960s Salem, Morris Crothers was concerned about prestige and employer value – but his perspective was at odds with that of actual employers. Mickey Mouse, as a cipher for the immense value embedded in things we dismiss or fail to understand, betrayed his anxiety that an old order was being disrupted and a new one born.

    Whatever the next ten years look like, in the sector or the wider economy, our starting point has to be that what will be economically (or humanly) valuable – and what skills are needed to make that value work – are at best unclear. The state may have a legitimate interest in the overall mix of subjects provided: it may have an interest as a purchaser where particular skills or expertise will be needed.

    But we also need to admit as a society that we don’t know what we will need, and that learning for the sake of learning has a value that is a little bit harder to measure.

    Source link

  • Collegiality and competition in German Centres of Excellence

    by Lautaro Vilches

    Collegiality, although threatened by increasing competitive pressures and described as a slippery and elastic concept, remains a powerful ideal underpinning academic and intellectual practices. Drawing on two empirical studies, this blog examines the relationships between collegiality and competition in Centres of Excellence (CoEs) in the Social Sciences and Humanities (SSH) in Germany. These CoEs are conceptualised as a quasi-departmental new university model that contrasts with the ‘university of chairs’, which characterises the old Humboldtian university model, organised around chairs led by professors. Hence my research question: How do academics experience collegiality, and how does it relate to competition, within CoEs in the SSH?

In 2006, the German government launched the Excellence Strategy (then known as the Excellence Initiative), which includes a scheme providing long-term funding for Centres of Excellence. Notably, this scheme extends beyond the traditionally more collaborative Natural Sciences to encompass the Social Sciences and Humanities. Germany, therefore, offers a unique case to explore transformations of collegiality amidst co-existing and overlapping university models. What, then, are the key features of these models?

In the old model of the ‘university of chairs’ the chair constitutes the central organisational unit of the university, with each one led by a single professor. Central to this model is the idea of collegial leadership, according to which professors govern the university autonomously, a practice that can be traced back to the old scholastic guild of the Middle Ages. During the eighteenth century, German universities underwent a process of modernisation influenced by Renaissance ideals, culminating in the establishment of the University of Berlin in Prussia in 1810 by Wilhelm von Humboldt. By the late nineteenth century, the Humboldtian model of the university had become highly influential, as it offered an organisational template in which the ideals of academic autonomy, academic freedom and the integration of research and teaching were institutionalised.

    Within the university of chairs, collegiality is effectively ‘contained’ and enacted within individual chairs. In this structure, professors have no formal superiors and academic staff are directly subordinate to a single professor (as chair holder) – not an institute or faculty. As a result, the university of chairs is characterised by several small and steep hierarchies.

    In recent decades – alongside the rise of the United States as the hegemonic power – the Anglo-American departmental model spread across the world, a shift that is associated with the entrepreneurial transformation of universities as they respond to growing competitive pressures.

    Remarkably, CoEs in the SSH in Germany are organised as ‘quasi-departments’ resembling a multidisciplinary Anglo-American department. They are very large in comparison with other collaborative grants, often comprising more than 100 affiliated researchers. They are structured around several ‘Research Areas’ and led by 25 Principal Investigators (mostly professors) who must agree on the implementation of the multidisciplinary and integrated research programme on which the CoE is based.

    The historical implications of this new model cannot be overstated. CoEs appear to operate as Trojan horses: cloaked in the prestige of excellence, they have introduced a fundamentally different organisational model into the German university of chairs, an institution that has endured over centuries.

Against the backdrop of these two models, what are the implications for collegiality and its relation to competition? A few clarifications are necessary. First, much of the research on collegiality has focused on governance, ignoring that collegiality is also practised ‘on the ground’. Here, I will define collegiality (a) as a form of ‘leadership and governance’, involving relations among leaders as well as interactions between leaders and those they govern; (b) as an ‘intellectual practice’ that can be best observed in the enactments of collaborative research; and (c) as a form of ‘citizenship’, involving practices that signify belonging to the CoE and its academic community.

Second, adopting this broader understanding requires acknowledging that collegiality is not only experienced by professors (in collegially governing the university) but also by the ‘invisible’ academic demos, namely Early Career Researchers (ECRs). Although often employed in precarious positions, ECRs are nonetheless significant members of the academic community, in particular in CoEs, which explicitly prioritise the training of ECRs as a core objective. Whilst ECRs are committed full time to the CoE and sustain much of its collaborative research activity, professors remain simultaneously bound to the duties of their respective positions as chairs.

A third clarification concerns the normative assumptions underpinning collegiality and its relationship to competition. Collegiality is sometimes idealised as an unambiguously positive value and practice in academia, whilst competition – in contrast – is seen as a threat to collegiality. However, this idealised depiction tends to underplay, for example, the role of hierarchies in academia and often invokes an indeterminate past – perhaps somewhere in the 1960s – when universities were governed autonomously by male professors and generously funded through block grants, largely protected from competitive pressures or external scrutiny.

These contextual conditions have evidently changed over recent decades: competition, at both the institutional and individual level, has intensified in academia, and CoE schemes exemplify this shift. CoE members, especially ECRs, are therefore embedded in multiple and overlapping competitions: at the institutional level through the CoE’s race for excellence; and at the individual level, through the competition for positions in the CoE, as well as for the grants, publications, and networks necessary for career advancement.

    How are collegiality and competition intertwined in the CoE? I identify three complex dynamics:

    • ‘The temporal flourishing of intellectual collegiality’ refers to the blooming of collegiality as part of the collaborative research work in the CoE. ECRs describe extensive engagement in organising, leading or co-leading research seminars (alongside PIs or other postdoctoral researchers), co-editing books, developing digital collaborative platforms, inviting researchers from abroad to join the CoE or organising and participating in informal meetings. Within this dynamic, competition is presented as being located ‘outside’ the CoE, temporarily deactivated. However, at the same time, ECRs remain aware of the omnipresence of competition, which ultimately threatens collegial collaboration when career paths, research topics or publications begin to converge. For this reason, intellectual collegiality and competition stand in an exclusionary relationship.
• ‘The rise of CoE citizenship for the institutional race for excellence’ captures the strong sense of engagement and commitment shown by ECRs (but also professors) towards the CoE. It is expressed through initiatives aimed at enhancing the CoE’s collective research performance, particularly in anticipation of competition for renewed excellence funding. This dynamic reveals that, for the CoE, citizenship and institutional competition are not oppositional but complementary, as collective engagement is mobilised in the service of competitive success.
• ‘Collegial leadership adapting to multiple competitions’ highlights the plurality of leadership modes, each one responding to different levels and forms of competition. At the level of professors and decision-making processes at the top, traditional collegial governance is ‘overstretched’. Although professors retain full authority, they struggle to reach consensus and to lead these large multidisciplinary centres effectively. This suggests a growing demand for new skills more closely associated with the figure of an academic manager than a professor. The institutional race for excellence thus places considerable strain on collegial governance rooted in the chair-based system. Accordingly, ECRs describe different and, apparently, contradictory modes of collegial leadership. For example, the ‘laissez faire’ mode aligns with the ideals of freedom and autonomy underpinning intellectual collegiality, but also with competition among individuals. They also describe leadership as ‘impositions’, which, on the one hand, erodes trust in professors and decision-making, but, on the other hand, intersects with notions of citizenship that compel ECRs to accept decisions, even when imposed. Yet many ECRs value and expect a more ‘inclusive leadership’ that supports the development of intellectual collegiality. Overall, the relationship between collegial leadership and competition is heterogeneous and adaptive, closely intertwined with the preceding dynamics.

    How, then, can these dynamics be interpreted together? Overall, the findings suggest that differences between university models matter profoundly for collegiality. Expectations regarding how academics collaborate, participate in governance and decision-making processes and form intellectual communities are embedded in specific institutional contexts.

    Regarding the relation between collegiality and competition, I suggest two contrasting interpretations. The first emphasises the flourishing of intellectual collegiality and the emergence of CoE citizenship, understood as a collective, multidisciplinary sense of belonging that is driven by – and complementary to – the institutional race for excellence. The second interpretation, however, views this flourishing as a temporal illusion. From this perspective, competition is omnipresent and stands in a fundamentally exclusionary relationship to collegiality: it threatens intellectual collaboration even when temporarily deactivated; it compels academics to engage in CoE-related work they may not intrinsically value; and it overstretches traditional forms of collegial leadership, promoting managerial modes that erode trust in both academic judgement and decision-making processes. Viewed in this light, competition ultimately poses a threat to collegiality. These rival interpretations may uneasily coexist, and the second one possibly predominates. More research is needed on how organisational contexts affect the relationship between collegiality and competition.

    Lautaro Vilches is a researcher at Humboldt University of Berlin and a consultant in higher education. His current research examines the implications of excellence schemes for transforming universities’ organisational arrangements and their effects on academic practices such as collegiality, academic mobility and research collaboration, particularly in the Social Sciences and Humanities. As a consultant he advises universities on advancing strategic change.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

• TEF proposals’ radical reconfiguration of quality risks destabilising the sector – here’s the fix

    The post-16 education and skills white paper reiterates what the Office for Students’ (OfS) recent consultation on the future of the Teaching Excellence Framework (TEF) had already made quite clear: there is a strong political will to introduce a regulatory framework for HE that imposes meaningful consequences on providers whose provision is judged as being of low quality.

    While there is much that could be said about the extent to which TEF is a valid way of measuring quality or teaching excellence, we will focus on the potential unintended consequences of OfS’s proposals for the future of TEF.

Regardless of one’s views of the TEF in general, it is relatively uncontroversial to suggest that TEF 2023 was a material improvement on its predecessor. An analysis of the outcomes from the 2017 TEF exercise made clear that a huge volume of work had gone into establishing a ranking of providers which was far too closely correlated with the characteristics of their student body.

Speaking plainly, the optimal strategy for achieving Gold in 2017 was to avoid recruiting too many students from socially and economically disadvantaged backgrounds. In 2017, the 20 providers with the fewest students who had received free school meals (FSM) had no Bronze awards, while the 20 with the highest proportions had no Gold awards associated with their provision.

Following the changes introduced in the next round of TEF assessments, while there still appears to be a correlation between student characteristics and TEF outcomes, the relationship is not as strong as it was in 2017. Here we have mapped the distribution of TEF 2023 Gold, Silver and Bronze ratings for providers with the lowest (Table 1) and highest (Table 2) proportions of FSM students.

    In TEF 2023, the link between student characteristics and TEF outcome was less pronounced. This is a genuine improvement, and one we should ensure is not lost under the new proposals for TEF.

    Reconfiguring the conception of quality

    The current TEF consultation proposes radical changes, not least of which is the integration of the regulator’s assessment of compliance with the B conditions of registration which deal with academic quality.

    At present, TEF differentiates between different levels of quality that are all deemed to be above minimum standards – built upon the premise that the UK higher education sector is, on average, “very high quality” in an international context – and operates in parallel with the OfS’s approach to ensuring compliance with minimum standards. The proposal to merge these two aspects of regulation is being posited as a way of reducing regulatory burden.

At the same time, the OfS – with strong ministerial support – is making clear that it wants to ensure there are regulatory consequences associated with provision that fails to meet its thresholds. And this is where things become more contentious.

    Under the current framework, a provider is technically not eligible to participate in TEF if it is judged by the OfS to fall foul of minimum quality expectations. Consequently, TEF ratings of Bronze, Silver and Gold are taken to correspond with High Quality, Very High Quality and Outstanding provision, respectively. While a fourth category, Requires Improvement, was introduced for 2023, vanishingly few providers were given this rating.

Benchmarked data on the publicly available TEF dashboard in 2023 was deemed to contribute no more than 50 per cent of the weight in each provider’s aspect outcomes. Crucially, data that was broadly in line with benchmark was deemed – as a starting hypothesis, if you will – to be consistent with a Silver rating: again, reinforcing the message that the UK HE sector is “Very High Quality” on the international stage.

    Remember this, as we journey into the contrasts with proposals for the new TEF.

Under the proposed reforms, OfS has signalled that providers failing to be of sufficient quality would be subject to regulatory consequences. Such consequences could span from enhanced monitoring to – in extremis – deregistration; such processes and penalties would be led by OfS. We have also received the clear indication that the government may wish to link permission to grow, and to receive inflation-linked fee increases, to quality outcomes. In other words, providers who fail to achieve a certain rating in TEF may experience student number caps and fee freezes.

    These are by no means minor inconveniences for any provider, and so one might reasonably expect that the threshold for implementing such penalties would be set rather high – from the perspectives both of the proportion of the sector that would, in a healthy system, be subject to regulatory action or governmental restriction at any one time, and the operational capacity of the OfS properly to follow through and follow up on the providers that require regulatory intervention. On the contrary, however, it is being proposed that both Requires Improvement- and Bronze-rated providers would be treated as inadequate in quality terms.

    While a provider rated as Requires Improvement might expect additional intervention from the regulator, it seems less obvious why a provider rated Bronze – which was previously defined as a High Quality provider – should expect to receive enhanced regulatory scrutiny and/or restrictions on their operation.

    It’s worse than we thought

    As the sector regulator, OfS absolutely ought to be working to identify areas of non-compliance and inadequate quality. The question is whether these new proposals achieve that aim.

This proposal amounts to OfS making a fundamental change to the way it conceptualises the very notion of quality and teaching excellence, moving from a general assumption of high quality across the sector to the presumption that there is low quality at a scale hitherto unimagined. While the potential consequences of these proposed reforms are important at the level of an individual provider, and for the perceptions of students and prospective students, it is equally important to ask what they mean for the HE sector as a whole.

    Figure 1 illustrates the way in which the ratings of quality across our sector might change, should the current proposals be implemented. This first forecast is based upon the OfS’s proposal that overall provider ratings will be defined by the lowest of their two aspect ratings, and shows the profile of overall ratings in 2023 had this methodology been applied then.

There are some important points to note regarding our methodology for generating this forecast. First, as we mentioned above, OfS has indicated an intention to base a provider’s overall rating on the lowest of the two assessed aspects: Student Experience and Student Outcomes. In TEF 2023, providers with mixed aspect ratings, such as Bronze for one and Silver for the other, may still have been judged as Silver overall, based on the TEF panel’s overall assessment of the evidence submitted. Under the new framework, this would not be possible, and such a provider would be rated Bronze by default. In addition, we are of course assuming that there has been no shift in metrics across the sector since the last TEF, and so these figures need to be taken as indicative and not definitive.
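To make that aggregation rule concrete, the sketch below shows the logic we have applied when projecting overall ratings – the rating order and function are our own illustration, not anything published by OfS.

```python
# A minimal sketch of the proposed aggregation rule: the overall rating is
# the lower of the two aspect ratings. The numeric ordering is an assumption
# for illustration; the actual TEF process also weighs panel judgement.
RANK = {"Requires Improvement": 0, "Bronze": 1, "Silver": 2, "Gold": 3}

def overall_rating(student_experience: str, student_outcomes: str) -> str:
    return min(student_experience, student_outcomes, key=RANK.get)

# A provider with Bronze for one aspect and Silver for the other defaults to Bronze.
print(overall_rating("Silver", "Bronze"))  # Bronze
```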

Figure 1: Predicted future TEF outcomes compared with TEF 2023 actual outcomes

    There are two startling points to highlight:

    • The effect of this proposed TEF reform is to drive a downward shift in the apparent quality of English higher education, with a halving of the number of providers rated as Outstanding/Gold, and almost six times the number of providers rated as Requires Improvement.
• The combined number of Bronze and Requires Improvement providers would increase from 50 to 89. Taken together with the proposal to reframe Bronze as being of insufficient quality, OfS could be subjecting nearly 40 per cent of the sector to special regulatory measures.

In short, the current proposals risk serious destabilisation of our sector, and, we argue, could end up making the very concept of quality in education less, not more, clear for students.

    Analysis by provider type

    Further analysis of this shift reveals that these changes would have an impact across all types of provider. Figures 2a and 2b show the distribution of TEF ratings for the 2023 and projected future TEF exercises, where we see high, medium and low tariff providers, as well as specialist institutions, equally impacted. For the 23 high tariff providers in particular, the changes would see four providers fall into the enhanced regulatory space of Bronze ratings, whereas none were rated less than Silver in the previous exercise. For specialist providers, of the current 42 with 2023 TEF ratings, five would be judged as Requires Improvement, whereas none received this rating in 2023.

    Figure 2a: Distribution of TEF 2023 ratings by provider type

    Figure 2b: Predicted distribution of future TEF ratings by provider type

    Such radical movement in OfS’s overall perception of quality in the sector requires explanation. Either the regulator believes that the current set of TEF ratings were overly generous and the sector is in far worse health than we have assumed (and, indeed, than we have been advising students via current TEF ratings), or else the very nature of what is considered to be high quality education has shifted so significantly that the way we rate providers requires fundamental reform. While the former seems very unlikely, the latter requires a far more robust explanation than has been provided in the current consultation.

    We choose to assume that OfS does not, in fact, believe that the quality of education in English HE has fallen off a cliff edge since 2023, and also that it is not intentionally seeking to radically redefine the concept of high quality education. Rather, in pursuit of a regulatory framework that does carry with it material consequences for failing to meet a robust set of minimum standards, we suggest that perhaps the current proposals have missed an opportunity to make more radical changes to the TEF rating system itself.

    We believe there is another approach that would help the OfS to deliver its intended aim, without destabilising the entire sector and triggering what would appear to be an unmanageable volume of regulatory interventions levelled at nearly 40 per cent of providers.

    Benchmarks, thresholds, and quality

    In all previous iterations of TEF, OfS has made clear that both metrics and wider evidence brought forward in provider and student submissions are key to arriving at judgements of student experience and outcomes. However, the use of metrics has very much been at the heart of the framework.

    Specifically, the OfS has gone to great lengths to provide metrics that allow providers to see how they perform against benchmarks that are tailored to their specific student cohorts. These benchmarks sit alongside the B3 minimum thresholds for key metrics, which OfS expects all providers to achieve. For the most part, providers eligible to enter TEF would have all metrics sitting above these thresholds, leaving the judgement of Gold, Silver and Bronze as a matter of the distance from the provider’s own benchmark.

    The methodology employed in TEF has also been quite simple to understand at a conceptual level:

    • A provider with metrics consistently 2.5 per cent or more above benchmark might be rated as Gold/Outstanding;
• A provider whose metrics are consistently within ±2.5 per cent of their benchmarks would likely be assessed as Silver/Very High Quality;
    • Providers who are consistently 2.5 per cent or more below their benchmark would be Bronze/High Quality or Requires Improvement.

There is no stated numerical threshold for the boundary between Bronze and Requires Improvement – it is a matter of holistic panel judgement, including but not limited to how far beyond -2.5 per cent of benchmark a provider’s data sits.

    It is worth noting here that in the current TEF, Bronze ratings (somewhat confusingly) could only be conferred for providers who could also demonstrate some elements of Silver/Very High Quality provision. Under the new TEF proposals, this requirement would be dropped.

    The challenge we see here is with the definition of Bronze being >2.5 per cent below benchmark; the issue is best illustrated with an example of two hypothetical Bronze providers:

    Let’s assume both Provider A and B have received a Bronze rating in TEF, because their metrics were consistently more than 2.5 per cent below benchmark, and their written submissions and context did not provide any basis on which a higher rating ought to be awarded. For simplicity, let’s pick a single metric, progression into graduate employment, and assume that the benchmark for these two providers happens to be the same, at 78 per cent.

In this example, Provider A obtained its Bronze rating with a progression figure of 75 per cent, which is 3 percentage points below its benchmark. Provider B, on the other hand, had a progression figure of 63 per cent. While this is a full 12 percentage points worse than Provider A, it is nonetheless still 3 percentage points above the minimum threshold specified by OfS, which is 60 per cent, and so it was not rated as Requires Improvement.

    Considering this example, it seems reasonable to conclude that Provider A is doing a far better job of supporting a comparable cohort of students into graduate employment than Provider B, but under the new TEF proposals, both are judged as being Bronze, and would be subject to the same regulatory penalties proposed in the consultation. From a prospective student’s perspective, it is hard to see what value these ratings would carry, given they conceal very large differences in the actual performance of the providers.
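To make the bluntness of that banding concrete, here is a minimal sketch of the classification the example implies under the proposed four-category framework – it assumes, as we do above, that Requires Improvement corresponds to falling below the minimum threshold, and it is our illustration rather than OfS’s methodology.

```python
# A minimal sketch of the proposed four-category banding applied to a single
# metric. Thresholds follow the figures in the text (Silver within +/-2.5
# points of benchmark; Requires Improvement below the minimum threshold);
# the function and providers are illustrative assumptions only.
def proposed_rating(metric: float, benchmark: float, minimum: float) -> str:
    if metric < minimum:
        return "Requires Improvement"
    distance = metric - benchmark
    if distance > 2.5:
        return "Gold"
    if distance >= -2.5:
        return "Silver"
    return "Bronze"  # everything from 2.5 points below benchmark down to the minimum

# Provider A: 75% progression against a 78% benchmark; Provider B: 63% against the same.
print(proposed_rating(75, 78, 60))  # Bronze
print(proposed_rating(63, 78, 60))  # Bronze - identical rating despite a 12-point gap
```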

    On the assumption that the Requires Improvement category would be retained for providers with more serious challenges – such as being below minimum thresholds in several areas – the obvious problem is that Bronze as a category in the current proposal is simply being stretched so far, it will lose any useful meaning. In short, the new Bronze category is too blunt a tool.

    An alternative – meet Meets Minimum Requirements

    As a practical solution, we recommend that OfS considers a fifth category, sitting between Bronze and Requires Improvement: a category of Meets Minimum Requirements.

    This approach would have two advantages. First, it would allow the continued use of Bronze, Silver and Gold in such a way that the terms retain their commonly understood meanings; a Bronze award, in common parlance, is not a mark of failure. Second, it would allow OfS to distinguish providers who, while below our benchmark for Very High Quality, are still within a reasonable distance of their benchmark such that a judgement of High Quality remains appropriate, from those whose gap to benchmark is striking and could indicate a case for regulatory intervention.

The judgement of Meets Minimum Requirements would mean the provider’s outcomes do not fall below the absolute minimum thresholds set by the regulator, but equally are too far from their benchmark to be awarded a quality kitemark of at least a Bronze TEF rating. The new category would reasonably be subject to increased regulatory surveillance, given the borderline risk of providers so rated failing to meet minimum standards in future.

    We argue that such a model would be far more meaningful to students and other stakeholders. TEF ratings of Bronze, Silver and Gold would continue to represent an active recognition of High, Very High, and Outstanding quality, respectively. In addition, providers meeting minimum requirements (but not having earned a quality kitemark in the form of a TEF award) would be distinguishable from providers who would be subject to active intervention from the regulator, due to falling below the absolute minimum standards.

    It would be a matter for government to consider whether providers deemed to be meeting minimum requirements should receive inflation-linked uplifts in fees, and should be permitted to grow; indeed, one constructive use of the increased grading nuance we propose here could be that providers who meet minimum requirements are subject to student number caps until they can demonstrate capability to grow safely by improving to the point of earning at least a Bronze TEF award. Such a measure would seem proportionately protective of the student interest, while still differentiating those providers from providers who are actively breaching their conditions of registration and would be subject to direct regulatory intervention.

    Modelling the impact

To model how this proposed approach might impact overall outcomes in a future TEF, we have, in the exercise that follows, used TEF 2023 dashboard data and retained the statistical definitions of Gold (>2.5 per cent above benchmark) and Silver (within ±2.5 per cent of benchmark) from the current TEF. We have modelled a proposed definition of Bronze as between 2.5 and 5 per cent below benchmark. Providers who Meet Minimum Requirements are defined as being between 5 and 10 per cent below benchmark, and Requires Improvement reflects metrics more than 10 per cent below benchmark.

    For the sake of simplicity, we have taken the average distance from benchmark for all Student Experience and Student Outcomes metrics for each provider to categorise providers for each Aspect Rating. The outcome of our analysis is shown in Table A, and is contrasted in Table B with an equivalent analysis under OfS’s current proposals to redefine a four-category framework.
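The banding logic we have applied in this modelling is sketched below – band edges follow the definitions above, while the handling of values that sit exactly on a boundary is our own assumption, and the two example providers are the hypothetical ones from earlier.

```python
# A minimal sketch of the five-category banding applied to a provider's
# average distance from benchmark (in percentage points). Band edges follow
# the definitions in the text; boundary handling is an assumption.
def five_category_rating(avg_distance_from_benchmark: float) -> str:
    d = avg_distance_from_benchmark
    if d > 2.5:
        return "Gold"
    if d >= -2.5:
        return "Silver"
    if d >= -5:
        return "Bronze"                      # 2.5 to 5 points below benchmark
    if d >= -10:
        return "Meets Minimum Requirements"  # 5 to 10 points below benchmark
    return "Requires Improvement"            # more than 10 points below benchmark

# Providers A and B from the earlier example (3 and 15 points below benchmark)
# would now be distinguished rather than both collapsing into Bronze.
print(five_category_rating(-3))   # Bronze
print(five_category_rating(-15))  # Requires Improvement
```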

    Table A. Distribution of aspect ratings according to a five-category TEF framework

    Table B. Distribution of aspect ratings according to OfS’s proposed four-category TEF framework

    Following OfS’s proposal that a provider would be given an overall rating that reflects the lowest rating of the two aspects, our approach leads to a total of 32 providers falling into the Meets Minimum Requirements and Requires Improvement categories. This represents 14 per cent of providers, which is substantially fewer than the 39 per cent of providers who would be considered as not meeting high quality expectations under the current OfS proposals. It is also far closer to the 22 per cent of providers who were rated Bronze or Requires Improvement in TEF 2023.

We believe that our approach represents a far more valid and meaningful framework for assessing quality in the sector, while OfS’s current proposals risk sending a problematic message that, since 2023, quality across the sector has inexplicably and catastrophically declined. Adding granularity to the ratings system in this way will help OfS to focus its regulatory surveillance where it is likely to be most useful in targeting provision of potentially low quality.

    Figure 4, below, illustrates the distribution of potential TEF outcomes based on OfS’s four category rating framework, contrasted with our proposed five categories. It is important to note that this modelling is based purely on metrics and benchmarks, and does not incorporate the final judgement of TEF panels, based on the narrative submissions providers submit.

This is particularly important because previous analysis has shown that many providers with metrics that were not significantly above benchmark, or not significantly in line with benchmark, were nonetheless awarded Gold or Silver ratings respectively, and this would have been based on robust narrative submissions and other evidence submitted by providers. Equally, some providers with data that was broadly in line with benchmark were awarded Bronze ratings overall, as the further evidence submitted in the narrative statements failed to convince the panel of an overall picture of very high quality.

    Figure 4: Predicted profile of provider ratings in a four- and five-category framework

    The benefits of a five-category approach

    First, the concept of a TEF award in the form of a Gold, Silver or Bronze rating retains its meaning for students and other stakeholders. Any of these three awards reflect something positive about a provider delivering beyond what we minimally expect.

Second, the pool of providers potentially falling into categories that would prompt enhanced scrutiny and potential regulatory intervention/governmental restrictions would drop to a level that is a much fairer reflection of the actual quality of our sector. We simply do not believe that anyone can be convinced that as much as 40 per cent of our sector is not of sufficiently high quality.

    Third, referencing the socio-economic diversity data by 2023 TEF award in Tables 1 and 2, and the future TEF outcomes modelling in Figure 1, our proposal significantly reduces the risk that students who were previously eligible for free school meals (who form strong proportions of the cohorts of Bronze-rated providers) would be further disadvantaged by their HE environment being impoverished via fee freezes and student number caps. We argue that such potential measures should be reserved for the Requires Improvement, and, plausibly, Meets Minimum Requirements categories.

Fourth, by expanding the range of categories, OfS would be able to distinguish between providers who are in fact meeting minimum expectations, but not delivering quality in experience or outcomes which would allow them to benefit from some of the freedoms proposed to be associated with TEF awards, and providers who are, in at least one of these areas, failing to meet even those minimum expectations.

    To recap, the key features of our proposal are as follows:

    • Retain Bronze, Silver and Gold in the TEF as ratings that reflect a positive judgement of High, Very High, and Outstanding quality, respectively.
    • Introduce a new rating – Meets Minimum Requirements – that recognises providers who are delivering student experience and outcomes that are above regulatory minimum thresholds, but are too far from benchmarks to justify an active quality award in TEF. This category would be subject to increased OfS surveillance, given the borderline risk of provision falling below minimum standards in future.
    • Retain Requires Improvement as a category that indicates a strong likelihood that regulatory intervention is required to address more serious performance issues.
    • Continue to recognise Bronze ratings as a mark of High Quality, and position the threshold for additional regulatory restrictions or intervention such that these would apply only to providers rated as Meets Minimum Requirements or Requires Improvement.

    Implementing this modest adaptation to the current TEF proposals would safeguard the deserved reputation of UK higher education for high-quality provision, while meeting the demand for a clear plan to secure improvements to quality and tackle pockets of poor quality.

    The deadline for responding to OfS’ consultation on TEF and the integrated approach to quality is Thursday 11 December. 

    Source link

  • What external examiners do and why it matters

    What external examiners do and why it matters

    Within the big visions presented in the Post-16 Education and Skills White Paper, a specific element of the academic standards landscape has found itself in the spotlight: external examining.

    Within a system predicated on the importance of academic freedom and academic judgement, where autonomous institutions develop their own curricula, external examining provides a crucial UK-wide, peer-led quality assurance mechanism supporting academic standards.

    It assures students that their work has been marked fairly, and reassures international stakeholders that degrees from each UK nation have consistent academic standards.

    So when a Minister describes the system as “inward-focused” and questions its objectivity and consistency, as Jacqui Smith did at Wonkhe’s Festival of Higher Education, the sector needs to respond.

    What external examiners actually do

    External examiners typically review a sample of student work to check that marking criteria and internal moderation processes have been correctly applied and therefore that it has been graded appropriately. They comment on the design, rigour and academic level of assessments, provide external challenge to course teams, and identify good practice and innovation. External examiner reports are escalated through the relevant academic governance processes within a provider, forming a foundation for critical self-reflection on the institution’s maintenance of academic standards.

    Education policy may be devolved, but the systems and infrastructure that maintain academic standards of UK degrees are UK-wide: external examiners frequently examine institutions across UK nation borders. Indeed, the system is also embedded in the Republic of Ireland, with Irish providers drawing some of their external examiners from the UK pool, of which England is the largest source. The system is also intertwined with the work of PSRBs. External examiner reports are often used by PSRBs in their own assurance and accreditation processes, with some PSRBs appointing and managing external examiners directly.

    Tale as old as time

    Scepticism of the system is not new. Over the last quarter of a century, there have been periodic reviews in response to critiques. The most recent of these system reviews was undertaken by QAA in 2022 in partnership with UUK, Guild HE and what is now the Quality Council for UK Higher Education.

    The review compiled insight from a survey covering 44 institutions and over 100 external examiners and senior quality professionals, roundtables with 170 individuals from across the sector, and workshops with PSRBs and students.

    It surfaced the importance of the system in maintaining the reputation of UK degrees through impartial scrutiny and triangulation of practice, especially when we know that international audiences view it as an important extra layer of assurance.

    And institutions value the critical friendship provided, and the challenge to course teams which is not always achieved through other routes. External examiner feedback is consistently seen as important in enhancing teaching delivery and assessment practices, as well as upholding due process and internal consistency.

    But our review also revealed thorny problems. The roles can be ambiguously defined, leading to confusion about whether examiners are expected to audit processes, assess standards, or act as enhancement partners. Standards can be interpreted and applied inconsistently – and institutional approaches to examiner engagement, training, and reporting can differ widely. Examiners often reported inadequate support from their home institutions, poor remuneration, and limited recognition for their work.

    To respond to these problems, QAA developed external examining principles, and guidance on how they should be implemented. These principles represented a UK-wide sector agreement on the role and responsibilities of external examiners, bringing a consistent and refreshed understanding across the nations.

    Where do we go from here?

    Given its embedded, UK-wide nature, the Westminster government will need to tread carefully and collaboratively in any “review” of the system. A unilateral choice to ditch the system in England would have significant implications. It would impact upon the experience and currency of the pool of external examiner expertise available across the rest of the British Isles, and would undermine the network of general reciprocity on which the system (like that for the peer review of research) is based.

    It would also impact those PSRBs whose accreditation requirements rely on external examiner reports, and in some cases on their ability to appoint their own external examiners to courses. To mitigate these risks, work should focus on further strengthening the system to address the English minister’s concerns. This should be sector-led.

    St Mary’s University Twickenham’s recent degree algorithm report demonstrated that sector-led initiatives on these topics do lead to changes in institutional practice; its decision to review its algorithm practice in 2021 was in response to QAA’s work on the appropriate design of degree algorithms, done in conjunction with UUK and GuildHE through the UK Quality Council.

    Using the same model, the Westminster government could work through the UK Quality Council to instigate a sector-led UK-wide review by QAA of how well the 2022 External Examining principles have been implemented across the sector since their creation. This would identify barriers in implementing the principles and surface where further work is needed. The barriers may be as simple as a lack of awareness, or might reveal more systemic challenges around an institution’s ability to encourage independent externals to follow a standardised approach.

    This review could result in updating the principles or proposing more radical solutions to address the system’s weaknesses. Crucially, this mechanism would incorporate the devolved governments and funder regulators, ensuring any changes are done with them, not despite them.

    An external red herring?

    The apparent link between external examining and concerns over grade inflation must also be interrogated. QAA’s 2022 research found that only a third of external examiners were asked by institutions to comment on degree algorithms, and indeed further conversations with quality professionals suggested that it was not perceived as appropriate for external examiners to pass comments on those algorithms. Either that needs to change, or the sector needs to demonstrate that scrutinising external examining in response to grade inflation concerns is like changing the curtains because the roof is leaking.

    If the core Government concern really is grade inflation, then perhaps another sector-led progress review against the UK sector’s 2019 Statement of Intent could be in order. This could look at the sector’s continued engagement with the guidance around producing degree outcome statements, the principles for effective degree algorithm design, and the outcome classification descriptors in the frameworks for higher education qualifications, to address broader concerns around grade inflation in a way that is truly UK-wide.

    One nation’s government extricating itself from these interwoven, mutually reinforcing systems risks undermining the whole thing. It would be another enormous and eminently avoidable risk to the UK-ness of a sector that continues to be seen as one entity to anyone outside of the hallowed halls of domestic higher education policy.

    The best way therefore to preserve the continuation of a system that is deeply valued by institutions across the UK is for the sector to lead the critical self-reflection itself, identify its value and merits, and address its weaknesses, preventing a painful fracturing of the ways that academic standards are maintained across the UK.

    This will ensure that degrees awarded by institutions in each UK nation remain trusted and comparable. As a result, governments, students, and international stakeholders can continue to have confidence in the standards of UK degrees.

    Source link

  • Quality assurance behind the dashboard

    Quality assurance behind the dashboard

    The depressing thing about the contemporary debate on the quality of higher education in England is how limited it is.

    From the outside, everything is about structures, systems, and enforcement: the regulator will root out “poor quality courses” (using data of some sort), students have access to an ombuds-style service in the Office for the Independent Adjudicator, the B3 and TEF arrangements mean that regulatory action will be taken. And so on.

    The proposal on the table from the Office for Students at the moment doubles down on a bunch of lagging metrics (continuation, completion, progression) and one limited lagging measure of student satisfaction (NSS) underpinning a metastasised TEF that will direct plaudits or deploy increasingly painful interventions based on a single precious-metal scale.

    All of these sound impressive, and may give your academic registrar sleepless nights – but none of them offer meaningful and timely redress to the student who has turned up for a 9am lecture to find that nobody has turned up to deliver it – again. Which is surely the point.

    It is occasionally useful to remember how little these kinds of visible, sector-level quality assurance systems have to do with actual quality assurance as experienced by students and others, so let’s look at how things currently work and break it down by need state.

    I’m a student and I’m having a bad time right now

    Continuation data and progression data published in 2025 reflect the experience of students who graduated between 2019 and 2022; completion data refers to cohorts between 2016 and 2019; the NSS reflects the opinions of final year students and is published the summer after they graduate. None of these contain any information about what is happening in labs, lecture theatres, and seminar rooms right now.

    As students who have a bad experience in higher education don’t generally get the chance to try it again, any useful system of quality assurance needs to be able to help students in the moment – and the only realistic way that this can happen is via processes within a provider.

    From the perspective of the student the most common of these are module feedback (the surveys conducted at the end of each unit of teaching) and the work of the student representative (a peer with the ability to feed back on behalf of students). Beyond this students have the ability to make internal complaints, ranging from a quiet word with the lecturer after the seminar to a formal process with support from the Students’ Union.

    While little national attention has been paid in recent years to these systems and pathways, they represent pretty much the only chance that an issue students are currently facing can be addressed before it becomes permanent.

    The question needs to be whether students are aware of these routes and feel confident in using them – it’s fair to say that experience is mixed across the sector. Some providers are very responsive to the student voice, others may not be as quick or as effective as they should be. Our only measure of these things is via the National Student Survey – about 80 per cent of the students in the 2025 cohort agree that students’ opinions about their course are valued by staff, while a little over two-thirds agree that it is clear that student feedback is acted upon.

    Both of these are up on the equivalent questions from about five years ago, suggesting a slow improvement in such work, but there is scope for such systems to be reviewed and promoted nationally – everything else is just a way for students to possibly seek redress long after anything could be done about it.

    I’m a graduate and I don’t know what my degree is worth/ I’m an employer and I need graduate skills

    The value of a degree is multifaceted – and links as much to the reputation of a provider or course as to the hard work of a student.

    On the former, much of the heavy lifting is done by the way the design of a course conforms to recognised standards. For more vocational courses, these are likely to have been set by professional, statutory, and regulatory bodies (PSRBs) – independent bodies who set requirements (with varying degrees of specificity) around what should be taught on a course and what a graduate should be capable of doing or understanding.

    Where no PSRB exists, course designers are likely to map to the QAA Subject Benchmarks, or to draw on external perspectives from academics in other universities. As links between universities and local employment needs solidify, the requirements set by local skills improvement plans (LSIPs) will play a growing part – and it is very likely that these will be mapped to the UK Standard Skills Classification descriptors.

    The academic standing of a provider is nominally administered by the regulator – in England the Office for Students has power to deregister a provider where there are concerns, making it ineligible for state funding and sparking a media firestorm that will likely torch any residual esteem. Events like this are rare – standards are generally maintained via a semi-formal system of cross-provider benchmarking and external examination, leavened by the occasional action of whistleblowers.

    That’s also a pretty good description of how we assure that the mark a graduate is awarded makes sense when compared to the marks awarded to other graduates. External examiners here play a role in ensuring that standards are consistent within a subject, albeit usually at module rather than course level; it’s another system that has been allowed (and indeed actively encouraged) to atrophy, but it still remains the only way of doing this stuff in anything approaching real time.

    I’m an international partner and I can’t be sure that these qualifications align with what we do

    Collaborating internationally, or even studying internationally, often requires some very specific statements around the quality of provision. One popular route to doing this is being able to assert that your provider meets well-understood international standards – the ESG (standards and guidelines for quality assurance in the European Higher Education Area) represent probably the most common example.

    Importantly, the ESG does not set standards about teaching and learning, or awarding qualifications – it sets standards for the way institutional quality assurance processes are assessed by national bodies. If you think that this is incredibly arm’s length you would be right, but it is also the only way of ensuring that the bits of quality assurance that interface with the student experience in near-real-time actually work.

    I am an academic and I want to design courses and teach students in ways that help them to succeed

    Quality enhancement – beyond compliance with academic standards – is about supporting academic staff in making changes to teaching and learning practice (how lectures are delivered, how assessments are designed, how individual support is offered). It is often seen as an add-on, but should really be seen as a core component of any system of quality assurance. Indeed, in Scotland, regulatory quality assurance in the form of the Tertiary Quality Enhancement Framework starts from the premise that tertiary provision needs to be “high quality” and “improving”.

    Outside of Scotland the vestiges of a previous UK-wide approach to quality enhancement exist in the form of AdvanceHE. Many academic staff will first encounter the principles and practice of teaching quality enhancement via developing a portfolio to submit for fellowship – increasingly a prerequisite for academic promotions. AdvanceHE also supports standards designed to underpin training in teaching for new academic staff, as well as support networks. The era of institutional “learning and teaching offices” (another vestige of a previous government-sponsored measure to support enhancement) is mostly over, but many providers have networks of staff with an interest in the practice of teaching in higher education.

    So what does the OfS actually do?

    In England, the Office for Students operates a deficit model of quality assurance. It assumes that, unless there is some evidence to the contrary, an institution is delivering higher education at an appropriate level of quality. Where the evidence exists for poor performance, the regulator will intervene directly. This is the basis of a “risk based” approach to quality assurance, where more effort can be expended in areas of concern and less burden placed on providers.

    For a system like this to work in a way that addresses any of the needs detailed above, OfS would need far more, and more detailed, information on where things are going wrong as soon as they happen. It would need to be bold in acting quickly, often based on incomplete or emerging evidence. Thus far, OfS has been notably averse to legal risk (having had its fingers burned by the Bloomsbury case), and has failed (despite a sustained attempt in the much-maligned Data Futures) to meaningfully modernise the process of data collection and analysis.

    It would be simpler and cheaper for OfS to support and develop institutions’ own mechanisms to support quality and academic standards – an approach that would allow for student issues to be dealt with quickly and effectively at that level. A stumbling block here would be the diversity of the sector, with the unique forms and small scale of some providers making it difficult to design any form of standardisation into these systems. The regulator itself, or another body such as the Office for the Independent Adjudicator (as happens now), would act as a backstop for instances where these processes do not produce satisfactory results.

    The budget of the Office for Students has grown far beyond the ability of the sector to support it (as was originally intended) via subscription. It receives more than £10m a year from the Department for Education to cover its current level of activity – it feels unlikely that more funds will arrive from either source to enable it to quality assure 420 providers directly.

    All of this would be moot if there were no current concerns about quality and standards. And there are many – stemming both from corners being cut (and systems being run beyond capacity) due to financial pressures, and from a failure to regulate in a way that grows and assures a provider’s own capacity to manage quality and standards. We’ve seen evidence from the regulator itself that the combination of financial and regulatory failures has led to many examples of quality and standards problems: courses and modules closed without suitable alternatives for students, difficulties faced by students in accessing staff and facilities due to overcrowding or underprovision, and concerns about an upward pressure on marks from a need to bolster continuation and completion rates.

    The route through the current crisis needs to be through improvement in providers’ own processes, and that would take something that the OfS has not historically offered the sector: trust.

    Source link

  • Reclaiming the narrative of educational excellence despite the decline of educational gain

    Reclaiming the narrative of educational excellence despite the decline of educational gain

    There was a time when enhancement was the sector’s watchword.

    Under the Higher Education Funding Council for England (HEFCE), concepts like educational gain captured the idea that universities should focus not only on assuring quality, but on improving it. Teaching enhancement funds, learning and teaching strategies, and collaborative initiatives flourished. Today, that language has all but disappeared. The conversation has shifted from enhancement to assurance, from curiosity to compliance. Educational gain has quietly declined, not as an idea, but as a priority.

    Educational gain was never a perfect concept. Like its cousin learning gain, it struggled to be measured in ways that were meaningful across disciplines, institutions, and student journeys. Yet its value lay less in what it measured than in what it symbolised. It represented a shared belief that higher education is about transformation: the development of knowledge, capability, and identity through the act of learning. It reminded us that the student experience was not reducible to outcomes, but highly personal, developmental, and distinctive.

    Shifting sands

    The shift from HEFCE to the Office for Students (OfS) marked more than a change of regulator; it signalled a change in the state’s philosophy, from partnership to performance management. The emphasis moved from enhancement to accountability. Where HEFCE invested in collaborative improvement, OfS measures and monitors. Where enhancement assumed trust in the professional judgement of universities and their staff, regulation presumes the need for assurance through metrics. This has shaped the sector’s language: risk, compliance, outcomes, baselines – all necessary, perhaps, but narrowing.

    The latest OfS proposals on revising the Teaching Excellence Framework mark a shift in their treatment of “educational gain.” Rather than developing new measures or asking institutions to present their own evidence of gain, OfS now proposes removing this element entirely, on the grounds that it produced inconsistent and non-comparable evidence. This change is significant: it signals a tighter focus on standardised outcomes indicators. Yet by narrowing the frame in this way, we risk losing sight of the broader educational gains that matter most to students, gains that are diverse, contextual, and resistant to capture through a uniform set of metrics. It speaks to a familiar truth: “not everything that counts can be counted, and not everything that can be counted counts”.

    And this narrowing has consequences. When national frameworks reduce quality to a narrow set of indicators, they risk erasing the very distinctiveness that defines higher education. Within a framework of uniform metrics, where does the space remain for difference, for innovation, for the unique forms of learning that make higher education a rich and diverse ecosystem? If we are all accountable to the same measures, it becomes even more important that we define for ourselves what excellence in education looks like, within disciplines, within institutions, and within the communities we serve.

    Engine room

    This is where the idea of enhancement again becomes critical. Enhancement is the engine of educational innovation: it drives new methods, new thinking, and the continuous improvement of the student experience. Without enhancement, innovation risks becoming ornamental: flashes of good practice without sustained institutional learning. The loss of “educational gain” as a guiding idea has coincided with a hollowing out of that enhancement mindset. We have become good at reporting quality, but less confident in building it.

    Reclaiming the narrative of excellence is, therefore, not simply about recognition and reward; it is about re-establishing the connection between excellence and enhancement. Excellence is what we value, enhancement is how we realise it. The Universitas 21 project Redefining Teaching Excellence in Research-Intensive Universities speaks directly to this need. It asks: if we are to value teaching as we do research, how do we define excellence on our own terms? What does excellence look like in an environment where metrics are shared but missions are not?

    For research-intensive universities in particular, this question matters. These institutions are often defined by their research outputs and global rankings, yet they also possess distinctive educational strengths: disciplinary depth, scholarly teaching, and research-informed curricula. Redefining teaching excellence means articulating those strengths clearly, and ensuring they are recognised, rewarded, and shared. It also means returning to the principle of enhancement: a commitment to continual improvement, collegial learning, and innovation grounded in scholarship.

    Compass point

    The challenge, and opportunity, for the sector is to rebuild the infrastructure that once supported enhancement. HEFCE-era initiatives, from the Subject Centres to the Higher Education Academy, created national and disciplinary communities of practice. They gave legitimacy to innovation and space for experimentation. The dismantling of that infrastructure has left many educators working in isolation, without the shared structures that once turned good teaching into collective progress. Reclaiming enhancement will require new forms of collaboration, cross-institutional, international, and interdisciplinary, that enable staff to learn from one another and build capacity for educational change.

    If educational gain as a metric was flawed, educational gain as an ambition is not. It reminds us that the purpose of higher education is not only to produce measurable outcomes but to foster human and intellectual development. It is about what students become, not just what they achieve. As generative AI reshapes how students learn and how knowledge itself is constructed, this broader conception of gain becomes more vital than ever. In this new context, enhancement is about helping students, and staff, to adapt, to grow, and to keep learning.

    So perhaps it is time to bring back “educational gain,” not as a measure, but as a mindset; a reminder that excellence in education cannot be mandated through policy or reduced to data. It must be defined and driven by universities themselves, through thoughtful design, collaborative enhancement, and continual renewal.

    Excellence is the destination, but enhancement is the journey. If we are serious about defining one, we must rediscover the other.

    Source link

  • The Office for Students steps on to shaky ground in an attempt to regulate academic standards

    The Office for Students steps on to shaky ground in an attempt to regulate academic standards

    The funny thing about the story of today’s intervention by the Office for Students is that it is not really about grade inflation, or degree algorithms.

    I mean, it is on one level: we get three investigation reports on providers related to registration condition B4, and an accompanying “lessons learned” report that focuses on degree algorithms.

    But the central question is about academic standards – how they are upheld, and what role an arm of the government has in upholding them.

    And it is about whether OfS has the ability to state that three providers are at “increased risk” of breaching a condition of registration on the scant evidence of grade inflation presented.

    And it is certainly about whether OfS is actually able to dictate (or even strongly hint at its revealed preferences on) the way degrees are awarded at individual providers, or the way academic standards are upheld.

    If you are looking for the rule book

    Paragraph 335N(b) of the OfS Regulatory Framework is the sum total of the advice it has offered before today to the sector on degree algorithms.

    The design of the calculations that turn a collection of module marks (each assessed carefully against criteria set out in the module handbook, and cross-checked by an academic from another university against shared expectations of what students should achieve) into an award of a degree at a given classification is a potential area of concern:

    where a provider has changed its degree classification algorithm, or other aspects of its academic regulations, such that students are likely to receive a higher classification than previous students without an increase in their level of achievement.

    These circumstances could potentially be a breach of condition of registration B4, which relates to “Assessment and Awards” – specifically condition B4.2(c), which requires that:

    academic regulations are designed to ensure that relevant awards are credible;

    Or B4.2(e), which requires that:

    relevant awards granted to students are credible at the point of being granted and when compared to those granted previously

    The current version of condition B4 came into force in May 2022.
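
    To make the object of all this concrete, here is a deliberately simplified, invented sketch of the kind of calculation a degree classification algorithm performs. Real institutional algorithms differ in year weightings, discounting of weakest credits, rounding and borderline (“zone of consideration”) rules – and it is precisely changes to those details that the condition is concerned with.

        # An invented, simplified degree classification algorithm, for illustration only.
        # In this made-up scheme final-year (level 6) module marks count double the
        # second-year (level 5) marks; real algorithms vary in weightings and rules.
        def classify(level5_marks: list[float], level6_marks: list[float]) -> str:
            weighted_mean = (sum(level5_marks) + 2 * sum(level6_marks)) / (
                len(level5_marks) + 2 * len(level6_marks)
            )
            if weighted_mean >= 70:
                return "First"
            if weighted_mean >= 60:
                return "Upper second (2:1)"
            if weighted_mean >= 50:
                return "Lower second (2:2)"
            if weighted_mean >= 40:
                return "Third"
            return "Fail"

        # The weighted mean here is 65.25, so an upper second.
        print(classify([62, 58, 65, 60], [68, 71, 66, 64]))

    Tweak the year weighting, or add a rule that discounts the weakest credits, and the same marks can produce a higher classification – which is exactly the kind of change the framework flags.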

    In the mighty list of things that OfS needs to have regard to that we know and love (section 2 of the 2017 Higher Education and Research Act), we learn that OfS has to pay mind to “the need to protect the institutional autonomy of English higher education providers” – and that, in the way it regulates, it should be:

    Transparent, accountable, proportionate, and consistent and […] targeted only at cases where action is needed

    Mutant algorithms

    With all this in mind, we look at the way the regulator has acted on this latest intervention on grade inflation.

    Historically the approach has been one of assessing “unexplained” (even once, horrifyingly, “unwarranted”) good honours (1 or 2:1) degrees. There’s much more elsewhere on Wonkhe, but in essence OfS came up with its own algorithm – taking into account the degrees awarded in 2010-11 and the varying proportions of students in given subject areas, with given A levels and of a given age – that starts from the position that non-traditional students shouldn’t be getting as many good grades as their (three good A levels, straight from school) peers, and if they did then this was potentially evidence of a problem.

    To quote from annex B (“statistical modelling”) of last year’s release:

    “We interact subject of study, entry qualifications and age with year of graduation to account for changes in awarding […] our model allows us to statistically predict the proportion of graduates awarded a first or an upper second class degree, or a first class degree, accounting for the effects of these explanatory variables.”

    When I wrote this up last year I did a plot of the impact each of these variables is expected to have – the fixed effect coefficient estimates show the increase (or decrease) in the likelihood of a person getting a first or upper second class degree.

    [Interactive chart: fixed effect coefficient estimates for the likelihood of a first or upper second class degree]
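
    For readers who want a feel for what that kind of modelling involves, a rough sketch (emphatically not OfS’s actual code, data or specification) might look like the following, using invented column names and synthetic data.

        # A rough illustration of the kind of model described: a logistic regression
        # predicting the probability of a "good" (first or 2:1) degree, with subject,
        # entry qualifications and age each interacted with graduation year.
        # The dataset and column names below are invented for this sketch.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n = 5000
        graduates = pd.DataFrame({
            "subject": rng.choice(["business", "biosciences", "design"], n),
            "entry_quals": rng.choice(["AAB_or_higher", "BBC", "BTEC", "other"], n),
            "age_group": rng.choice(["under_21", "21_and_over"], n),
            "grad_year": rng.choice(["2010-11", "2022-23"], n),
            "good_honours": rng.binomial(1, 0.7, n),  # 1 = first or 2:1 awarded
        })

        # Each explanatory variable is interacted with year of graduation.
        model = smf.logit(
            "good_honours ~ C(subject) * C(grad_year)"
            " + C(entry_quals) * C(grad_year)"
            " + C(age_group) * C(grad_year)",
            data=graduates,
        ).fit(disp=False)

        # Predicted probabilities give the "expected" proportion of good honours
        # against which the observed proportion is compared.
        graduates["predicted"] = model.predict(graduates)
        print(graduates.groupby("grad_year")["predicted"].mean())

    The mechanics are unremarkable – the controversy lies in the premise of treating the relationships observed in 2010-11 as the yardstick for what later cohorts “should” achieve, as discussed below.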

    One is tempted to wonder whether the bit of OfS that deals with this issue ever speaks to the bit that is determined to drive out awarding gaps based on socio-economic background (which, as we know, very closely correlates with A level results). This is certainly one way of explaining why – if you look at the raw numbers – the providers awarding the most first class and 2:1 degrees are in the Russell Group and at small selective specialist providers.

    [Interactive chart: raw proportions of first and 2:1 degrees awarded, by provider]

    Based on this model (which for 2023-24 failed to accurately predict fully fifty per cent of the grades awarded) OfS selected – back in 2022(!) – three providers where it felt that the “unexplained” awards had risen surprisingly quickly over a single year.

    What OfS found (and didn’t find)

    Teesside University was not found to have ever been in breach of condition B4 – OfS was unable to identify statistically significant differences in the proportion of “good” honours awarded to a single cohort of students if it applied each of the three algorithms Teesside has used over the past decade or so. There has been – we can unequivocally say – no evidence of artificial grade inflation at Teesside University.

    St Mary’s University, Twickenham and the University of West London were found to have historically been in breach of condition B4. The St Mary’s issue related to an approach that was introduced in 2016-17 and was replaced in 2021-22, in West London the offending practice was introduced in 2015-16 and replaced in 2021-22. In both cases, the replacement was made because of an identified risk of grade inflation. And for each provider a small number of students may have had their final award calculated using the old approach since 2021-22, based on a need to not arbitrarily change an approach that students had already been told about.

    To be clear – there is no evidence that either university has breached condition B4 (not least because condition B4 came into force after the offending algorithms had been replaced). In each instance the provider in question has made changes based on the evidence it has seen that an aspect of the algorithm is not having the desired effect, exactly the way in which assurance processes should (and generally do) work.

    Despite none of the providers in question currently being in breach of B4, all three are now judged to be at an increased risk of breaching condition B4.

    No evidence has been provided as to why these three particular institutions are at an “increased risk” of a breach while others who may use substantially identical approaches to calculating final degree awards (but have not been lucky enough to undergo an OfS inspection on grade inflation) are not. Each is required to conduct a “calibration exercise” – basically a review of their approach to awarding undergraduate degrees of the sort each has already completed (and made changes based on) in recent years.

    Vibes-based regulation

    Alongside these three combined investigation/regulatory decision publications comes a report on bachelors’ degree classification algorithms. This purports to set out the “lessons learned” from the three reports, but it actually sets up what amounts to a revision to condition B4.

    We recognise that we have not previously published our views relating to the use of algorithms in the awarding of degrees. We look forward to positive engagement with the sector about the contents of this report. Once the providers we have investigated have completed the actions they have agreed to undertake, we may update it to reflect the findings from those exercises.

    The important word here is “views”. OfS expresses some views on the design of degree algorithms, but it is not the first to do so and there are other equally valid views held by professional bodies, providers, and others – there is a live debate and a substantial academic literature on the topic. Academia is the natural home of this kind of exchange of views, and in the crucible of scholarly debate evidence and logical consistency are winning moves. Having looked at every algorithm he could find, Jim Dickinson covers the debates over algorithm characteristics elsewhere on the site.

    It does feel like these might be views expressed ahead of a change to condition B4 – something that OfS does have the power to do, but would most likely (in terms of good regulatory practice, and the sensitive nature of work related to academic standards managed elsewhere in the UK by providers themselves) be subject to a full consultation. OfS is suggesting that it is likely to find certain practices incompatible with the current B4 requirements – something which amounts to a de facto change in the rules even if it has been done under the guise of guidance.

    Providers are reminded that (as they are already expected to do) they must monitor the accuracy and reliability of current and future degree algorithms – and there is a new reportable event: providers need to tell OfS if they change their algorithm in a way that may result in an increase in the proportion of “good” honours degrees awarded.

    And – this is the kicker – when they do make these changes, the external calibration they do cannot relate to external examiner judgements. The belief here is that external examiners only ever work at a module level, and don’t have a view over an entire course.

    There is even a caveat – a provider might ask a current or former external examiner to take an external look at their algorithm in a calibration exercise, but the provider shouldn’t rely solely on their views as a “fresh perspective” is needed. This reads back to that rather confusing section of the recent white paper about “assessing the merits of the sector continuing to use the external examiner system” while apparently ignoring the bit around “building the evidence base” and “seeking employers views”.

    Academic judgement

    Historically, all this has been a matter for the sector – academic standards in the UK’s world-leading higher education sector have been set and maintained by academics. As long ago as 2019 the UK Standing Committee for Quality Assessment (now known as the Quality Council for UK Higher Education) published a Statement of Intent on fairness in degree classification.

    It is short, clear and to the point – as was then the fashion in quality assurance circles. Right now we are concerned with paragraph b, which commits providers to protecting the value of their degrees by:

    reviewing and explaining how their processes for calculating final classifications fully reflect student attainment against learning criteria, protect the integrity of classification boundary conventions, and maintain comparability of qualifications in the sector and over time

    That’s pretty uncontroversial, as is the recommended implementation pathway in England: a published “degree outcomes statement” articulating the results of an internal institutional review.

    The idea was that these statements would show the kind of quantitative trends that OfS gets interested in, offer some assurance that institutional assessment processes meet the reference points and reflect the expertise and experience of external examiners, and provide a clear and publicly accessible rationale for the degree algorithm. As Jim sets out elsewhere, in the main this has happened – though it hasn’t been an unqualified success.

    To be continued

    The release of this documentation prompts a number of questions, both on the specifics of what is being done and more widely on the way in which this approach does (or does not) constitute good regulatory practice.

    It is fair to ask, for instance, whether OfS has the power to decide that it has concerns about particular degree awarding practices, even where it is unable to point to evidence that these practices are currently having a significant impact on degrees awarded, and to promote a de facto change in interpretation of regulation that will discourage their use.

    Likewise, it seems problematic that OfS believes it has the power to declare that the three providers it investigated are at risk of breaching a condition of registration because they have an approach to awarding degrees that it has decided that it doesn’t like.

    It is concerning that these three providers have been announced as being at higher risk of a breach when other providers with similar practices have not. It is worth asking whether this outcome meets the criteria for transparent, accountable, proportionate, and consistent regulatory practice – and whether it represents action being targeted only at cases where it is demonstrably needed.

    More widely, the power to determine or limit the role and purpose of external examiners in upholding academic standards has not historically been one held by a regulator acting on behalf of the government. The external examiner system is a “sector recognised standard” (in the traditional sense) and generally commands the confidence of registered higher education providers. And it is clearly a matter of institutional autonomy – remember in HERA OfS needs to “have regard to” institutional autonomy over assessment, and it is difficult to square this intervention with that duty.

    And there is the worry about the value and impact of sector consultation – an issue picked up in the Industry and Regulators Committee review of OfS. Should a regulator really be initiating a “dialogue with the sector” when its preferences on the external examiner system are already so clearly stated? And it isn’t just the sector – a consultation needs to ensure that the views of employers (and other stakeholders, including professional bodies) are reflected in whatever becomes the final decision.

    Much of this may become clear over time – there is surely more to follow in the wider overhaul of assurance, quality, and standards regulation that was heralded in the post-16 white paper. A full consultation will help centre the views of employers, course leaders, graduates, and professional bodies – and the parallel work on bringing the OfS quality functions back into alignment with international standards will clearly also have an impact.

    Source link

  • Why busy educators need AI with guardrails

    Why busy educators need AI with guardrails


    In the growing conversation around AI in education, speed and efficiency often take center stage, but that focus can tempt busy educators to use what’s fast rather than what’s best. To truly serve teachers–and above all, students–AI must be built with intention and clear constraints that prioritize instructional quality, ensuring efficiency never comes at the expense of what learners need most.

    AI doesn’t inherently understand fairness, instructional nuance, or educational standards. It mirrors its training and guidance, usually as a capable generalist rather than a specialist. Without deliberate design, AI can produce content that’s misaligned or confusing. In education, fairness means an assessment measures only the intended skill and does so comparably for students from different backgrounds, languages, and abilities–without hidden barriers unrelated to what’s being assessed. Effective AI systems in schools need embedded controls to avoid construct‑irrelevant content: elements that distract from what’s actually being measured.

    For example, a math question shouldn’t hinge on dense prose, niche sports knowledge, or culturally-specific idioms unless those are part of the goal; visuals shouldn’t rely on low-contrast colors that are hard to see; audio shouldn’t assume a single accent; and timing shouldn’t penalize students if speed isn’t the construct.

    To improve fairness and accuracy in assessments:

    • Avoid construct-irrelevant content: Ensure test questions focus only on the skills and knowledge being assessed.
    • Use AI tools with built-in fairness controls: Generic AI models may not inherently understand fairness; choose tools designed specifically for educational contexts.
    • Train AI on expert-authored content: AI is only as fair and accurate as the data and expertise it’s trained on. Use models built with input from experienced educators and psychometricians.

    These subtleties matter. General-purpose AI tools, left untuned, often miss them.
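
    As a toy illustration of what “embedded controls” can mean in practice (invented here, and far simpler than any real product), a guardrail layer might run rule-based checks over an AI-drafted item before it ever reaches a human reviewer:

        # A toy, invented guardrail: rule-based checks run over an AI-drafted math item
        # before human review. Real fairness controls are far richer; the word list and
        # sentence-length threshold here are illustrative only.
        CULTURALLY_SPECIFIC_TERMS = {"touchdown", "innings", "thanksgiving"}

        def guardrail_flags(item_text: str, max_words_per_sentence: int = 20) -> list[str]:
            flags = []
            sentences = [s for s in item_text.replace("?", ".").split(".") if s.strip()]
            # Dense prose can end up measuring reading load rather than the math construct.
            if any(len(s.split()) > max_words_per_sentence for s in sentences):
                flags.append("sentence length exceeds target reading load")
            # Idioms or niche knowledge unrelated to the construct being measured.
            found = sorted(t for t in CULTURALLY_SPECIFIC_TERMS if t in item_text.lower())
            if found:
                flags.append(f"possible construct-irrelevant references: {found}")
            return flags

        draft = ("In the final innings, Priya scored 12 runs and then 7 more. "
                 "How many runs did she score in total?")
        print(guardrail_flags(draft))  # flags the cricket reference for review

    Checks like these do not replace expert review – they simply catch the obvious problems before an educator’s limited time is spent on them.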

    The risk of relying on convenience

    Educators face immense time pressures. It’s tempting to use AI to quickly generate assessments or learning materials. But speed can obscure deeper issues. A question might look fine on the surface but fail to meet cognitive complexity standards or align with curriculum goals. These aren’t always easy problems to spot, but they can impact student learning.

    To choose the right AI tools:

    • Select domain-specific AI over general models: Tools tailored for education are more likely to produce pedagogically-sound and standards-aligned content that empowers students to succeed. In a 2024 University of Pennsylvania study, students using a customized AI tutor scored 127 percent higher on practice problems than those without.
    • Be cautious with out-of-the-box AI: Without expertise, educators may struggle to critique or validate AI-generated content, risking poor-quality assessments.
    • Understand the limitations of general AI: While capable of generating content, general models may lack depth in educational theory and assessment design.

    General AI tools can get you 60 percent of the way there. But that last 40 percent is the part that ensures quality, fairness, and educational value. This requires expertise to get right. That’s where structured, guided AI becomes essential.

    Building AI that thinks like an educator

    Developing AI for education requires close collaboration with psychometricians and subject matter experts to shape how the system behaves. This helps ensure it produces content that’s not just technically correct, but pedagogically sound.

    To ensure quality in AI-generated content:

    • Involve experts in the development process: Psychometricians and educators should review AI outputs to ensure alignment with learning goals and standards.
    • Use manual review cycles: Unlike benchmark-driven models, educational AI requires human evaluation to validate quality and relevance.
    • Focus on cognitive complexity: Design assessments with varied difficulty levels and ensure they measure intended constructs.

    This process is iterative and manual. It’s grounded in real-world educational standards, not just benchmark scores.

    Personalization needs structure

    AI’s ability to personalize learning is promising. But without structure, personalization can lead students off track. AI might guide learners toward content that’s irrelevant or misaligned with their goals. That’s why personalization must be paired with oversight and intentional design.

    To harness personalization responsibly:

    • Let experts set goals and guardrails: Define standards, scope and sequence, and success criteria; AI adapts within those boundaries.
    • Use AI for diagnostics and drafting, not decisions: Have it flag gaps, suggest resources, and generate practice, while educators curate and approve.
    • Preserve curricular coherence: Keep prerequisites, spacing, and transfer in view so learners don’t drift into content that’s engaging but misaligned.
    • Support educator literacy in AI: Professional development is key to helping teachers use AI effectively and responsibly.

    It’s not enough to adapt–the adaptation must be meaningful and educationally coherent.

    AI can accelerate content creation and internal workflows. But speed alone isn’t a virtue. Without scrutiny, fast outputs can compromise quality.

    To maintain efficiency and innovation:

    • Use AI to streamline internal processes: Beyond student-facing tools, AI can help educators and institutions build resources faster and more efficiently.
    • Maintain high standards despite automation: Even as AI accelerates content creation, human oversight is essential to uphold educational quality.

    Responsible use of AI requires processes that ensure every AI-generated item is part of a system designed to uphold educational integrity.

    An effective approach to AI in education is driven by concern–not fear, but responsibility. Educators are doing their best under challenging conditions, and the goal should be building AI tools that support their work.

    When frameworks and safeguards are built-in, what reaches students is more likely to be accurate, fair, and aligned with learning goals.

    In education, trust is foundational. And trust in AI starts with thoughtful design, expert oversight, and a deep respect for the work educators do every day.


    Source link

  • From improvement to compliance – a significant shift in the purpose of the TEF

    From improvement to compliance – a significant shift in the purpose of the TEF

    The Teaching Excellence Framework has always had multiple aims.

    It was partly intended to rebalance institutional focus from research towards teaching and student experience. Jo Johnson, the minister who implemented it, saw it as a means of increasing undergraduate teaching resources in line with inflation.

    Dame Shirley Pearce prioritised enhancing quality in her excellent review of TEF implementation. And there have been other purposes of the TEF: a device to support regulatory interventions where quality fell below required thresholds, and as a resource for student choice.

    And none of this should ignore its enthusiastic adoption by student recruitment teams as a marketing tool.

    As former Chair and Deputy Chair of the TEF, we are perhaps more aware than most of these competing purposes, and more experienced in understanding how regulators, institutions and assessors have navigated the complexity of TEF implementation. The TEF has had its critics – something else we are keenly aware of – but it has had a marked impact.

    Its benchmarked indicator sets have driven a data-informed and strategic approach to institutional improvement. Its concern with disparities for underrepresented groups has raised the profile of equity in institutional education strategies. Its whole institution sweep has made institutions alert to the consequences of poorly targeted education strategies and prioritised improvement goals. Now, the publication of the OfS’s consultation paper on the future of the TEF is an opportunity to reflect on how the TEF is changing and what it means for the regulatory and quality framework in England.

    A shift in purpose

    The consultation proposes that the TEF becomes part of what the OfS sees as a more integrated quality system. All registered providers will face TEF assessments, with no exemptions for small providers. Given the number of new providers seeking OfS registration, it is likely that the number to be assessed will be considerably larger than the 227 institutions in the 2023 TEF.

    Partly because of the larger number of assessments to be undertaken, TEF will move to a rolling cycle, with a pool of assessors. Institutions will still be awarded three grades – one for outcomes, one for experience and one overall – but their overall grade will simply be the lower of the two other grades. The real impact of this will be on Bronze-rated providers who could find themselves subject to a range of measures, potentially including student number controls or fee constraints, until they show improvement.

    The OfS consultation paper marks a significant shift in the purpose of the TEF, from quality enhancement to regulation and from improvement to compliance. The most significant changes are at the lower end of assessed performance. The consultation paper makes sensible changes to aspects of the TEF which always posed challenges for assessors and regulators, tidying up the relationship between the threshold B3 standards and the lowest TEF grades. It correctly separates measures of institutional performance on continuation and completion – over which institutions have more direct influence – from progression to employment – over which institutions have less influence.

    Pressure points

    But it does this at some heavy costs. By treating the Bronze grade as a measure of performance at, rather than above, threshold quality, it will produce just two grades above the threshold. In shifting the focus towards quantitative indicators and away from institutional discussion of context, it will make TEF life more difficult for further education institutions and institutions in locations with challenging graduate labour markets. The replacement of the student submission with student focus groups may allow more depth on some issues, but comes at the expense of breadth, and the student voice is, disappointingly, weakened.

    There are further losses as the regulatory purpose is embedded. The most significant is the move away from educational gain, and this is a real loss: following TEF 2023, almost all institutions were developing their approaches to and evaluation of educational gain, and we have seen many examples where this was shaping fruitful approaches to articulating institutional goals and the way they shape educational provision.

    Educational gain is an area in which institutions were increasingly thinking about distinctiveness and how it informs student experience. It is a real loss to see it go, and it will weaken the power of many education strategies. It is almost certainly the case that the ideas of educational gain and distinctiveness are going to be required for confident performance at the highest levels of achievement, but it is a real pity that this is now less explicit. Educational gain can drive distinctiveness, and distinctiveness can drive quality.

    Two sorts of institutions will face the most significant challenges. The first, obviously, are providers rated Bronze in 2023, or Silver-rated providers whose indicators are on a downward trajectory. Eleven universities were given a Bronze rating overall in the last TEF exercise – and 21 received Bronze either for the student experience or student outcomes aspects. Of the 21, only three Bronzes were for student outcomes, but under the OfS plans, all would be graded Bronze, since any institution would be given its lowest aspect grade as its overall grade. Under the proposals, Bronze-graded institutions will need to address concerns rapidly to mitigate impacts on growth plans, funding, prestige and competitive position.

    The second group facing significant challenges will be those in difficult local and regional labour markets. Of the 18 institutions with Bronze in one of the two aspects of TEF 2023, only three were graded Bronze for student outcomes, whereas 15 were for student experience. Arguably this was to be expected when only two of the six features of student outcomes had associated indicators: continuation/completion and progression.

    In other words, if indicators were substantially below benchmark, there were opportunities to show how outcomes were supported and educational gain was developed. Under the new proposals, the approach to assessing student outcomes is largely, if not exclusively, indicator-based, for continuation and completion. The approach is likely to reinforce differences between institutions, and especially those with intakes from underrepresented populations.

    The stakes

    The new TEF will play out in different ways in different parts of the sector. The regulatory focus will increase pressure on some institutions, whilst appearing to relieve it in others. For those institutions operating at 2023 Bronze levels or where 2023 Silver performance is declining, the negative consequences of a poor performance in the new TEF, which may include student number controls, will loom large in institutional strategy. The stakes are now higher for these institutions.

    On the other hand, institutions whose graduate employment and earnings outcomes are strong are likely to feel more relieved, though careful reading of the grade specifications for higher performance suggests that there is work to be done on education strategies in even the best-performing 2023 institutions.

    In public policy, lifting the floor – by addressing regulatory compliance – and raising the ceiling – by promoting improvement – at the same time is always difficult, but the OfS consultation seems to have landed decisively on the side of compliance rather than improvement.

    Source link