Tag: OfS

  • WEEKEND READING: Three reasons why the TEF will collapse under the weight of OfS and DfE expectations

    Author:
    Professor Paul Ashwin

    This blog was kindly authored by Paul Ashwin, Professor of Higher Education, Lancaster University.

    The Office for Students (OfS) and the Department for Education (DfE) have big plans to make the TEF much more consequential. They want future TEF outcomes to determine whether institutions can increase their intake of students and their undergraduate tuition fees in line with inflation, which could mean the difference between survival and merger/closure for many institutions. These plans require the OfS to show that the TEF provides a credible measure of institutional educational quality, whilst also fulfilling the OfS’s central remit of acting in the interest of students. The OfS consultation on the future approach to quality regulation provides an opportunity to assess the OfS’s latest attempt at such a justification. To say it looks weak is a huge understatement. Rather, unless there is a radical rethink, these proposals will lead to the collapse of the TEF.

    There are three reasons why this collapse would be inevitable.

    First, the TEF provides a broad, if flawed, measure of institutional educational quality. This was fine when the main consequence of a TEF award was the presence or absence of a marketing opportunity for institutions. However, if the TEF has existential consequences for institutions, then a whole series of limitations are suddenly cast in a deeply unflattering spotlight. The most obvious of these is that the TEF uses programme-level metrics to make judgements about institutional quality. It is both conceptual and methodological nonsense to attempt to scale up judgements of quality from the programme to the institutional level in this way, as has been routinely stated in every serious review of the National Student Survey. This didn’t matter too much when the TEF was lacking in teeth, but if it has profound consequences, then why wouldn’t institutions consider legal challenges to this obvious misuse of metrics? This situation is only exacerbated by the OfS’s desire to extend the TEF to all institutions regardless of size. The starkest consequence of this foolhardy venture is that a small provider with insufficient student experience and outcomes data could end up being awarded TEF Gold (and the ability to increase student recruitment and tuition fees in line with inflation) on the basis of a positive student focus group and an institutional statement. How might larger institutions awarded a Bronze TEF react to such obvious unfairness? That the OfS has put itself in this position shows how little it understands the consequences of what it is proposing.

    Second, in relation to the OfS acting in the student interest, things look even worse. As the TEF attempts to judge quality at an institutional level, it does not give any indication of the quality of the particular programme a student will directly experience. As the quality of degree programmes varies across all institutions, students on, for example, a very high quality psychology degree in an institution with TEF Bronze would pay lower tuition fees than students on a demonstrably much lower quality psychology degree in an institution that is awarded TEF Gold. How can this possibly be in the student interest? Things get even worse when we consider the consequences of TEF awards being based on data that will be between four and ten years out of date by the time students graduate. For example, let’s imagine a student who was charged higher tuition fees based on a TEF Gold award, whose institution gets downgraded to a TEF Bronze in the next TEF. Given this lower award would be based on data from the time the student was actually studying at the institution, how, in the name of the student interest, would students not be eligible for a refund for the inflation-linked element of their tuition fee?

    Third, the more consequential the TEF becomes, the more pressure is put on it as a method of quality assessment. This would have predictable and damaging effects. If TEF panels know that being awarded TEF Bronze could present an existential threat to institutions, then they are likely to be incredibly reluctant to make such an award. It is not clear how the OfS could prevent this without inappropriately and illegitimately intervening in the work of the expert panels. Also, in the current state of financial crisis, institutional leaders are likely to feel forced to game the TEF. This would make the TEF even less of an effective measure of educational quality and much more of a measure of how effectively institutions can play the system. It is totally predictable that institutions with the greatest resources will be in by far the best position to finance the playing of such games.

    The OfS and DfE seem determined to push ahead with this madness, a madness which incidentally goes completely against the widely lauded recommendations of the TEF Independent Review. Their response to the kinds of issues discussed here appears to be to deny any responsibility by asking, “What’s the alternative?” But there are much more obvious options than using a broad-brush mechanism of institutional quality to determine whether an institution can recruit more students and raise its undergraduate tuition fees in line with inflation. For example, it would make more sense, and be more transparent to all stakeholders, if these decisions were based on ‘being in good standing’ with the regulator against a public set of required standards. This would also allow the OfS to take much swifter action against problematic providers than using a TEF-based assessment process. However things develop from here, one thing is certain: if the OfS and DfE cannot find a different way forward, then the TEF will soon collapse under the weight of expectations it cannot possibly meet.

    Source link

  • OfS Access and Participation data dashboards, 2025 release

    The sector-level dashboards that cover student characteristics have a provider-level parallel – the access and participation dashboards do not have a regulatory role but are provided as evidence to support institutions in developing access and participation plans.

    Much A&P activity is pre-determined – the current system pretty much insists that universities work with schools locally and address the risks highlighted in the national Equality of Opportunity Risk Register (EORR). It’s a cheeky John Blake way of embedding a national agenda into what are meant to be provider level plans (that, technically, unlock the ability to charge fees up to the higher level) but it could also be argued that provider specific work (particularly on participation measures rather than access) has been underexamined.

    The A&P dashboards are a way to focus attention on what may end up being institutionally bound problems – the kinds of things that providers can fix, and quickly, rather than the socio-economic learning revolution end of things that requires a radicalised cadre of hardened activists to lead and inspire the proletariat, or something.

    We certainly don’t get any detailed mappings between numeric targets declared in individual plans and the data – although my colleague Jim did have a go at that a while ago. Instead, this is just the raw information for you to examine, hopefully in an easier to use and speedier fashion than the official version (which requires a user guide, no less).

    Fun with indicators

    There are four dashboards here, covering most of what OfS presents in the mega-board. Most of what I’ve done examines four-year aggregations rather than individual years (though there is a timeseries at provider level), I’ve opted for the 95 per cent confidence interval to show the significance of indicator values, and there are a few other minor settings that I’ve either not bothered with or set to a sensible default.
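
    If you want a feel for what one of those confidence intervals actually involves, here is a minimal sketch in Python – the Wilson score interval is my choice of method for illustration, and the figures are hypothetical; OfS documents its own methodology, so treat this as a rough guide rather than the official calculation:

        import math

        def wilson_interval(successes: int, n: int, z: float = 1.96):
            """Wilson score interval for a proportion; z = 1.96 gives ~95 per cent."""
            p = successes / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
            return centre - half, centre + half

        # e.g. a hypothetical provider where 830 of 1,000 entrants continue
        low, high = wilson_interval(830, 1000)
        print(f"83.0% (95% CI {low:.1%} to {high:.1%})")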

    I know that nobody reads this for data dashboard design tips, but for me a series of simpler dashboards is far more useful to the average reader than a single behemoth that can do anything – and the way HESA presents (in the main) very simple tables or plain charts to illustrate variations across the sector represents to me a gold standard for provider level data. OfS is a provider of official statistics, and as such is well aware that section V3.1 of the code of practice requires that:

    Statistics, data and explanatory material should be relevant and presented in a clear, unambiguous way that supports and promotes use by all types of users

    And I don’t think we are quite there yet with what we have, while the simple release of a series of flat tables might get us closer.

    If you like it you should have put a confidence interval on it

    To start with, here is a tool for constructing ranked displays of providers against a single metric – here defined as a life cycle stage (access, continuation, completion, attainment, progression) expressed as a percentage of successful achievements for a given subgroup.

    Choose your split indicator type and the actual indicator using the boxes on the top right – select the life cycle stage in the box in the middle, and set mode and level (note that certain splits and stages may only be available for certain modes and levels). You can highlight a provider of interest using the box on the bottom right, and also find an overall sector average by searching on “*”. The colours show provider group, and the arrows are upper and lower confidence bounds at the standard 95 per cent level.

    You’ll note that some of the indicators show intersections – with versions of multiple indicators shown together. This allows you to look at, say, white students from a more deprived background. The denominator in the tool tip is the number of students in that population, not the number of students where data is available.

    [singles rank]

    I’ve also done a version allowing you to look at all single indicators at a provider level – which might help you to spot particular outliers that may need further analysis. Here, each mark is a split indicator (just the useful ones, I’ve omitted stuff like “POLAR quintiles 1,2,4, and 5” which is really only worth bothering with for gap analysis), you can select provider, mode, and level at the top and highlight a split group (eg “Age (broad)”) or split (eg “Mature aged 21 and over”).

    Note here that access refers to the proportion of all entrants from a given sub-group, so even though I’ve shown it on the same axis for the sake of space it shows a slightly different thing – the other lifecycle stages relate to a success (be that in continuation, progression or whatever) based on how OfS defines “success”.

    [singles provider]

    Oops upside your head

    As you’ve probably spotted from the first section, to really get things out of this data you need to compare splits with other relevant splits. We are talking, then, about gaps – on any of the lifecycle stages – between two groups of students. The classic example is the attainment gap between white and Black students, but you can have all kinds of gaps.
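
    For readers who want to sanity-check a gap of their own, here is a minimal sketch – assuming two simple independent proportions, which is my simplification rather than the OfS methodology, with hypothetical figures:

        import math

        def gap_with_ci(s1: int, n1: int, s2: int, n2: int, z: float = 1.96):
            """Gap between two subgroup proportions, with a normal-approximation CI."""
            p1, p2 = s1 / n1, s2 / n2
            gap = p1 - p2
            se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
            return gap, gap - z * se, gap + z * se

        # hypothetical attainment gap: 1,800 of 2,000 white qualifiers awarded a
        # first or upper second, against 300 of 400 Black qualifiers
        gap, low, high = gap_with_ci(1800, 2000, 300, 400)
        print(f"gap {gap:+.1%} (95% CI {low:+.1%} to {high:+.1%})")

    If the interval excludes zero, the gap is unlikely to be noise at that confidence level.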

    This first one is across a single provider, and for the four lifecycle stages (this time, we don’t get access) you can select your indicator type and two indicators to get the gap between them (mode and level are at the bottom of the screen). When you set your two splits, the largest or most common group tends to be on indicator 1 – that’s just the way the data is designed.

    [gaps provider]

    For quick context you can search for “*” again on the provider name filter to get sector averages, but I’ve also built a sector ranking to help you put your performance in context alongside similar providers.

    This is like a cross between the single ranking and the provider-level gaps analysis – you just need to set the two splits in the same way.

    [gaps rank]

    Sign o’ the times

    The four year aggregates are handy for most applications, but as you begin to drill in you are going to start wondering about individual years – are things getting gradually worse or gradually better? Here I’ve plotted all the individual year data we get – which is, of course, different for each lifecycle stage (because of when data becomes available). This is at a provider level (filter on the top right) and I’ve included confidence intervals at 95 per cent in a lighter colour.
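
    If you wanted to reproduce this kind of view from the underlying data yourself, a minimal matplotlib sketch looks something like the following – the figures are entirely hypothetical and the plotting choices are mine, not anything OfS-specified:

        import matplotlib.pyplot as plt

        # hypothetical yearly continuation rates with 95 per cent CI bounds
        years = [2019, 2020, 2021, 2022, 2023]
        rate = [0.88, 0.87, 0.85, 0.86, 0.87]
        low = [0.86, 0.85, 0.83, 0.84, 0.85]
        high = [0.90, 0.89, 0.87, 0.88, 0.89]

        fig, ax = plt.subplots()
        ax.fill_between(years, low, high, alpha=0.2, label="95% CI")  # lighter colour
        ax.plot(years, rate, marker="o", label="indicator value")
        ax.set_xlabel("Year")
        ax.set_ylabel("Continuation rate")
        ax.legend()
        plt.show()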

    [gaps provider timeseries]

    Source link

  • OfS characteristics dashboards, 2025 release

    The Office for Students releases a surprisingly large amount of data for a regulator that is supported by a separate “designated data body”.

    Some of it is painfully regulatory in nature – the stuff of nightmares for registrars and planning teams that are not diligently pre-preparing versions of the OfS’ bespoke splits in real time (which feels like kind of a burden thing, but never mind).

    Other parts of it feel like they might be regulatory, but are actually descriptive. No matter how bad your provider looks on any of the characteristics or access and participation indicators, it is not these that spark the letter or the knock on the door. But they still speak eloquently about the wider state of the sector, and of particular providers within it.

    Despite appearances, it is this descriptive data that is likely to preoccupy ministers and policymakers. It tells us about the changing size and shape of the sector, and of the improvement to life chances it does and does not offer particular groups of students.

    Outcomes characteristics

    How well do particular groups of students perform against the three standard OfS outcomes measures (continuation, completion, progression) plus another (attainment) that is very much under the direct control of individual providers?

    It’s a very pertinent question given the government’s HE Reform agenda language on access and participation – and the very best way to answer it is via an OfS data release. Rather than just the traditional student characteristics – age, ethnicity, the various area based measures – we get a range of rarities: household residual income, socioeconomic status, parental higher education experience. And these come alongside greatly expanded data on ethnicity (15 categories) and detail on age.

    Even better, as well as comparing full time and part-time students, we can look at the performance of students by detailed (or indeed broad) subject areas – and at a range of levels of study.

    We learn that students from better off backgrounds (household residual income of £42,601 or greater) are more likely to progress to a positive outcome – but so are students of nursing. Neither of these is at the level of medical students, or distance learning students – but very slightly above Jewish students. The lowest scoring group on progression is currently students taught via subcontractual arrangements – but there are also detriments for students with communication-related disabilities, students from Bangladeshi backgrounds, and students with “other” sexual orientations.

    In some cases there are likely explanatory factors and probably intersections – in others it is anyone’s guess. Again and again, we see a positive relationship between parental income or status and doing well at higher education: but it is also very likely that progression across the whole of society would show a similar pattern.

    On this chart you can select your lifecycle stage on the top left-hand side, and use the study characteristics drop down to drill into modes of study or subject – there’s also an ability to exclude subcontractual provision outside of registered providers via the population filter. At the bottom you can set domicile (note that most characteristics are available only for UK students) and level of study (again note that some measures are limited to undergraduates). The characteristics themselves are seen as the individual blobs for each year: mouse over to find similar blobs in other years or use the student characteristic filter or sub-characteristic highlighter to find ones that you want.

    [Full screen]

    The “attainment” life cycle stage refers to the proportion of undergraduate qualifiers that achieve a first or upper second for their first degree. It’s not something we tend to see outside of the “unexplained first” lens, and it is very interesting to apply the detailed student characteristics to what amounts to awarding rates.

    It remains strikingly difficult to achieve a first or upper second while being Black. Only 60 per cent of Black UK full time first degree students managed this in 2023-24, which compares well to nearer 50 per cent a decade ago, but not so well with the 80 per cent of their white peers. The awarding gap remains stark and persistent.

    Deprivation appears to be having a growing impact on continuation – again for UK full time first degree students, the gap between the most (IMD Q1, 83.3 per cent) and least (Q5, 93.1 per cent) deprived backgrounds has grown in recent years. And the subject filters add another level of variation – in medicine the difference is tiny, but in natural sciences it is very large.

    Population characteristics

    There are numerators (number of students where data is included) and denominators (number of students with those characteristics) within the outcomes dashboard, but sometimes we just need to get a sense of the makeup of the entire sector – focusing on entrants, qualifiers, or all students.

    We learn that nearly 10 per cent of UK first degree students are taught within a subcontractual arrangement – rising to more than 36 per cent in business subjects. Counter-intuitively, the proportion of UK students studying other undergraduate courses (your level 4 and 5 provision) via subcontractual arrangements has fallen over the years – 18 per cent of these students were taught this way in 2010, and just 13 per cent (of a far lower total) now. Again, the only rise is in business provision – subcontractual teaching is offered to nearly a quarter of non-degree undergraduates from UK domiciles there.

    Around a third (33.14 per cent) of UK medicine or dentistry undergraduates are from managerial or professional backgrounds, a higher proportion than any other subject area, even though this has declined slightly in recent years.

    Two visualisations here – the first shows student characteristics as colours on the bars (use the filter at the top) and allows you to filter what you see by mode or subject area using the filters on the second row. At the bottom you can further filter by level of study, domicile, or population (all, entrants, or qualifiers). The percentages include students where the characteristic is “not applicable” or where there is “no response” – this is different from (but I think clearer than) the OfS presentation.

    [by student characteristic]

    The second chart puts subject or mode as the colours, and allows you to look at the makeup of particular student characteristic groups on this basis. This is a little bit of a hack, so you need to set the sub-characteristic to “total” in order to alter the main characteristic group.

    [by study characteristic]

    Entry qualification and subject

    Overall, UK undergraduate business students are less likely to continue, complete, attain a good degree, or achieve a positive progression outcome than their peers in any other subject area – and this gap has widened over time. There is now a 1.5 percentage point progression gap between business students and creative or performing arts students: on average a creative degree is more likely to get you into a job or further study than one in business, and this has been the case since 2018.

    And there is still a link between level 3 qualifications and positive performance at every point of the higher education life cycle. The data here isn’t perfect – there’s no way to control for the well documented link between better level 3 performance (more As and A*s, fewer Cs, Ds and BTECs) and socioeconomic status or disadvantage. Seventy two per cent of the best performing BTEC students were awarded a first or upper second, compared with 96 per cent of the best performing A level students.

    This is all taken from a specific plot of characteristics (entry qualification and subject) data – unfortunately for us it contains information on those two topics only, and you can’t even cross plot them.

    [Full screen]

    What OfS makes of all this

    Two key findings documents published alongside this release detail the regulatory observations. The across-the-board decline in continuation appears to have been halted, with an improvement in 2022-23 – but mature entrants are still around 9 percentage points less likely to continue.

    We get recognition of the persistent gap in performance at all levels other than progression between women (who tend to do better) and men (who tend to do worse). And of the counterintuitive continuation benefits experienced by disabled students. And we do get a note on the Black attainment gap I noted above.

    Again, this isn’t the regulatory action end of OfS’ data operations – so we are unlikely to see investigations or fines related to particularly poor performance on some of these characteristics within individual providers. Findings like these at a sector level suggest problems at a structural rather than institutional level, and, as is increasingly being made plain, we are not really set up to deal with structural higher education issues in England – indeed, these two reports and millions of rows of data do not even merit a press release.

    We do get data on some of these issues at provider level via the access and participation dashboards, and we’ll dive into those elsewhere.

    Source link

  • OfS rebalances the free speech/harassment see-saw on antisemitism

    The Union of Jewish Students (UJS) has published a fascinating new episode of “Yalla”, its podcast for Jewish students.

    Hosted by Louis Danker, the new President of UJS, the September 2025 episode features an extensive interview with Arif Ahmed, OfS’ Director for Freedom of Speech and Academic Freedom.

    The conversation comes weeks after the regulator’s new higher education free speech guidance came into force on August 1, 2025, alongside enhanced harassment protections.

    What makes the interview especially interesting is what it doesn’t mention – Ahmed’s reversal on the IHRA definition of antisemitism.

    In February 2021, Ahmed wrote in a HEPI blog that he was strongly against Gavin Williamson’s requirement that universities adopt the IHRA definition of antisemitism, arguing that it obstructs perfectly legitimate defence of Palestinian rights and chills free speech:

    I hope the Secretary of State reconsiders the need for it; but these new free speech duties ought to rule it out in any case.

    We’re all allowed to change our minds on things. The issue is the extent to which the law, or the regulation he’s now in charge of, offers clarity on the volte-face.

    And while there’s plenty of helpful material in there on how OfS might approach casework and complaints, it does raise all sorts of questions about expectations – and OfS’ strategy for communicating what in some cases amounts to significant additions and clarifications to its guidance.

    What the podcast says

    The interview centres on what I’ve previously described as the twin sandbags on the regulatory see-saw – the Higher Education (Freedom of Speech) Act 2023 and the E6 condition on harassment and sexual misconduct.

    A central theme throughout is UJS’ contention that the campus environment for Jewish students is deteriorating. Ahmed acknowledges there has been “a big rise in antisemitic incidents in recent years, on campus, in the country more generally” and describes this as a source of “grave concern” for OfS.

    The discussion then considers how this manifests practically on campuses. Ahmed describes, for example, scenarios where Jewish students may feel unable to attend lectures due to protest activity, or where “protests outside Jewish accommodation” create hostile environments.

    He first emphasises that while “political ideas expressed in the protests may be perfectly awful and expressible,” universities can still regulate their “time, place and manner” – such that core functions can keep going.

    Hence on protest regulation, Ahmed says:

    …if you have protests that take place in such a way that Jewish students don’t feel able to attend lectures … it may also be right for the university to say, well, you can’t do it here, and you can’t do it in this place, and you can’t have it every day outside a lecture theatre.

    He also points to protests outside Jewish accommodation as another context where restrictions could be justified.

    Ahmed’s contemporary position on IHRA is explained as follows:

    …we ourselves have adopted the IHRA definition, and we do think it can be a very useful tool for understanding modern antisemitism.

    He adds that there is “no obstacle, in principle” to universities adopting a particular definition, and “certainly not” the IHRA working definition.

    He clarifies that it is “absolutely compatible” with the guidance, provided it’s being used properly as a way to understand antisemitism rather than to suppress lawful and legitimate debate. That latter caveat may represent the only vestige of his previous concerns about IHRA chilling Palestinian rights advocacy.

    The published guidance takes an uncompromising stance on Holocaust denial, where Ahmed explains this was made explicit after consultation feedback seeking clarity:

    …we will not under any circumstances protect Holocaust denial, so nothing that we do in our complaint scheme or otherwise will protect that speech.

    With the more obvious stuff out of the way, the subsequent nuanced discussion involves distinguishing between legitimate political discourse and antisemitic harassment, particularly around coded language.

    Ahmed addresses scenarios like “Zionists off our campus” signs, explaining that context is crucial. On coded antisemitism, Ahmed explains:

    …very often when people use the expression “Zionist”, for instance, it can actually be used as a kind of euphemistic expression meaning Jewish people, and in the circumstances where that’s so it seems very much more likely to be something that’s targeted at individuals because of their race, because of their religion.

    He then distinguishes between attacking ideas versus targeting individuals, noting that speech “directed at ideas” differs from speech that makes individuals feel excluded because of their protected characteristic.

    Ahmed is at pains to point out that freedom of speech encompasses religious expression, making Jewish students’ ability to practise their faith a free speech issue. He also describes scenarios where Jewish students might hide religious symbols like Stars of David due to campus hostility. He then explains the religious expression dimension:

    …if you have an atmosphere on campus which is allowed to grow, which grew, Jewish students are intimidated out of expressing their own religion, that’s an affront to their freedom of speech.

    The interview also explores “chilling effects” – where students self-censor rather than face consequences. Ahmed describes situations where students with pro-Israel views or Jewish religious expression might “decide not to say it in the first place” due to fears about academic consequences or social ostracism.

    Nevertheless, he repeatedly stresses that harassment determinations require objective analysis, not just subjective feelings. He explains that the legal test involves whether:

    …a reasonable person would think that was… creating an intimidating atmosphere for people because of their race, because of their religion.

    And on that point:

    …it’s not enough, for speech to count as harassment, that the person at the receiving end feels offended; what’s important is that a reasonable person would think that was so.

    He concludes by stressing that freedom of speech “historically… most protects minorities and those… for whom their voice and their words are the only things that they have.”

    What the papers say

    The Jewish News coverage of Ahmed’s podcast exemplifies how the reassuring rhetoric translates into heightened community expectations.

    The headline itself – “Free speech tsar tells universities: stop intimidation of Jewish students” – frames Ahmed’s nuanced legal discussion as a clear directive for immediate action.

    The article’s language amplifies Ahmed’s confidence, presenting his tentative statements (“it may also be right for the university to say”) as firm commitments (“universities must take firm steps”) and his regulatory expectations (“we would expect universities to take action”) as binding obligations.

    The coverage also amps up specific protections – Jewish students’ ability to attend lectures, enter accommodation, and express their religion – without conveying the complex legal determinations universities might need to navigate to provide that protection.

    Ahmed’s discussion of “coded language” becomes a promise that universities can identify and restrict antisemitic euphemisms, while his IHRA compatibility statements are presented as resolving rather than acknowledging ongoing tensions between free speech and antisemitism prevention.

    Most tellingly, UJS President Louis Danker’s response reveals both the raised expectations and their fragility. While expressing satisfaction that “the Office for Students shares our concerns,” he acknowledges that “the ambiguity of the guidance will be challenged by crucial test cases in the coming months.”

    This tension, between reassurance about shared concerns and worry about guidance ambiguity, captures the potential problem that OfS has created – confident promises built on uncertain legal foundations that will inevitably face testing in precisely the complex scenarios that the framework struggles to address.

    What the podcast doesn’t say

    The central question is whether the reassuring statements to Jewish students align with what universities can actually deliver under existing legal frameworks.

    If we take Holocaust denial, for example, Ahmed demonstrates clear understanding:

    Article 17 says that none of these rights can be used, essentially, to destroy other people’s rights. So speech that aims to destroy others’ rights… the courts have found, for instance, that many instances of Holocaust denial they’ve looked at, fall under it.

    That explanation appears to be legally accurate. Article 17 is sometimes called the “abuse clause” of the European Convention – it strips protection from speech that aims to destroy the rights of others, such as Holocaust denial.

    But the guidance leaves the explanation out, simply declaring Holocaust denial unprotected without explaining why – or when. That omission matters, because Article 17 normally operates alongside Article 10(2) – another part of the Convention that allows restrictions on speech if they are necessary and proportionate to protect others.

    As a reminder, the OfS guidance’s three-step framework treats human rights considerations as sequential rather than integrated:

    Step 1 asks simply whether speech is “within the law” – defined as speech not prohibited by primary legislation, legal precedent, or secondary legislation. Crucially, university regulations and contracts don’t count as “law” here. If not, don’t allow it. If it is, move to Step 2.

    Step 2 considers whether there are “reasonably practicable steps” to enable the speech. Universities should consider factors like legal requirements (including any formal duties), maintaining essential functions, and physical safety – but explicitly cannot consider the viewpoint expressed, whether it’s controversial, or reputational impact. If steps can be taken, take them. If not, move to Step 3.

    Step 3 – which is only reached if no reasonably practicable steps exist – then asks whether any restrictions are “prescribed by law” and proportionate under the European Convention. This involves checking if there’s a specific legal rule authorising the restriction, and runs through a four-part proportionality test weighing the importance of the objective against the severity of limiting the right.

    That proportionality test looks something like this:

    • Is the objective important enough? – The reason for restricting speech must be sufficiently weighty to justify limiting a fundamental right.
    • Is the restriction rationally connected? – The measure must actually help achieve the objective, not just be vaguely related to it.
    • Is this the least restrictive option? – Could you achieve the same goal with less impact on free speech? If yes, you must use the less intrusive approach.
    • Does the benefit outweigh the harm? – Even if the first three tests are met, you must still balance the severity of restricting speech against how much the restriction actually helps achieve your objective. The restriction fails if the damage to free expression outweighs the benefit gained.
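
    To make the sequencing vivid, the framework reduces to something like the sketch below – this is my own schematic paraphrase of the guidance, for illustration only, and the thing to notice is the strict ordering of the steps:

        # Schematic sketch of the three-step framework (my paraphrase of the
        # guidance, for illustration only - not a legal tool).

        def assess_speech(within_the_law: bool,
                          practicable_steps_exist: bool,
                          prescribed_by_law: bool,
                          proportionate: bool) -> str:
            # Step 1: domestic law only; university regulations don't count as "law"
            if not within_the_law:
                return "do not allow"
            # Step 2: reasonably practicable steps, blind to viewpoint and reputation
            if practicable_steps_exist:
                return "take those steps"
            # Step 3: Convention analysis arrives only here - "prescribed by law"
            # plus the four-part proportionality test above
            if prescribed_by_law and proportionate:
                return "restriction may stand"
            return "restriction fails"

    The Article 10(2) balancing only appears at Step 3, after the domestic-law check – which is precisely the sequencing problem picked up below.
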

    As I’ve noted before on here, the published approach seems to conflict with Minasyan v Armenia (2019), where the European Court of Human Rights struck down Armenia’s handling of a protest case. The Armenian courts had first checked whether protesters broke domestic criminal law, and only afterwards asked whether their free speech rights were engaged.

    Strasbourg was clear – you can’t separate those steps. The balancing of rights under Article 10(2) has to be done at the outset. So when the guidance asks universities to check domestic criminal law first and only consider broader human rights implications as an afterthought, the sequential framework seems to repeat the flaw that Strasbourg condemned.

    Meanwhile, Ahmed seems to correctly state the objective harassment test:

    …it’s not enough, for speech to count as harassment, that the person at the receiving end feels offended… what’s important is that a reasonable person would think that was so.

    But his practical applications consistently reference subjective experiences without clear frameworks for objective assessment. He discusses Jewish students feeling “unable to attend lectures” or “intimidated out of expressing their own religion” – but then offers up little on how universities should distinguish between justified concerns and unfounded complaints.

    The “reasonable person” test sounds simple, but in practice it is one of the hardest questions courts face. Would an average observer see this as harassment, taking into account context, repetition, and effect?

    Judges often split over the answer, even with days of evidence. Expecting university or SU staff to make that call in real time, during a protest or at a room-booking stage, is asking staff to perform complex human rights analyses on the fly. Clarity on what he would consider reasonable in those scenarios would help.

    Ahmed’s discussion of antisemitic language also illustrates the analytical burden placed on those enforcing or explaining rules day to day:

    …very often when people use the expression Zionist, for instance, it can actually be used as a kind of euphemistic expression meaning Jewish people.

    Determining when “Zionist” functions as coded antisemitism requires careful analysis of speaker intent, contextual factors, and impact on targeted individuals. These are determinations that typically require evidence about the speaker’s background and previous statements, analysis of the specific context and setting, an assessment of audience understanding and reaction, and an evaluation of the targeting effects on specific individuals.

    Day to day, staff may well lack both the investigative capacity and legal expertise to perform those sorts of analyses reliably. Ahmed acknowledges the complexity – “it might depend on context” – but doesn’t offer anything like a practical methodology for making the determinations.

    The UK Supreme Court in Elan-Cane (2021) stressed that domestic bodies should not push human rights analysis beyond what the European Court of Human Rights has already recognised. Lord Reed warned against overstepping into areas Strasbourg had not yet endorsed.

    Ahmed’s framework arguably asks universities to do exactly that – making human rights calls (on protests, coded language, or harassment) that even the courts approach with extreme caution.

    If legally trained judges with full procedural protections must be cautious about extending human rights analysis, how can staff be expected to perform similar determinations through internal processes? Is OfS fit to do so when it gets a complaint in? And what are the penalties for getting it wrong?

    Rights collision

    Another silence in the interview is how to handle the collision of rights. He clearly anchors harassment to protected characteristics like race and religion, and he treats Zionism as an idea that can be lawfully discussed – while warning it is sometimes used as a euphemism for “Jew” in context. He doesn’t quite say “Zionism is a protected belief” in terms, though that would be the likely legal position under Equality Act case law. The same goes for anti-Zionism.

    Under UK equality law, political and philosophical beliefs qualify for protection if they meet what’s known as the Grainger criteria – that is, the belief must be genuinely held, relate to a weighty aspect of human life, attain a certain level of seriousness and cogency, and be worthy of respect in a democratic society.

    Courts have already recognised beliefs such as environmentalism, gender-critical feminism, and ethical veganism under this test. Anti-Zionism looks like it would qualify on the same basis, provided it is expressed as a coherent political or philosophical position rather than as a proxy for antisemitism.

    What he does not explain is what universities should do when the protections appear to come into direct conflict or quite how a university is supposed to differentiate between the political or philosophical position and the proxy.

    Let’s imagine a student holding a placard reading “Zionism is racism” and another responding that “anti-Zionism is antisemitism.” Both statements can amount to the expression of protected beliefs under the Equality Act. Both students might also claim they are being harassed by the other.

    Courts take weeks to sift through context, intent, and impact in such cases – weighing not just Article 10 free speech but also Article 9 (religion), Article 8 (private life) and Article 11 (assembly).

    On balance, “Zionists off campus” feels like it targets a group of people. Those banned from painting it on a banner may feel their speech is being chilled. “Zionism off campus” feels more like a protected piece of political expression. Some reading that may feel harassed. Complaints in either event are likely.

    Recent cases show how fraught these clashes can be. In Forstater v CGD Europe, the tribunal upheld that gender-critical beliefs were protected, even though many found them offensive – but also emphasised that protection for a belief does not mean protection for every manifestation of it.

    In Mackereth v DWP, the tribunal held that a doctor’s refusal to use trans patients’ pronouns could lawfully be limited, despite his Christian beliefs being protected. The principle is clear – both Zionism and anti-Zionism can be protected, but the way they are expressed may still lawfully be restricted if it harasses others.

    What’s missing from Ahmed’s account is the extent to which universities are expected to perform that fine distinction in real time, and at which stage of a process they’re expected to do so.

    What now?

    The danger in all of this is a form of regulatory false advertising – promising protection through frameworks that universities cannot properly execute without risking legal challenge or practical failure.

    The focus on context is welcome, but it doesn’t solve the core problem – the absence of a practical framework for when and how to balance competing rights. Without it, institutions risk inconsistency, overreach, or paralysis – either censoring lawful political expression or failing to protect students from harassment.

    The reassuring tone also suggests clearer legal boundaries than actually exist. When he says that OfS “would expect universities to take action” about intimidatory speech, he presents complex, fact-specific determinations as straightforward administrative decisions.

    It’s a false certainty that may mislead universities into thinking they have clear authority to restrict speech, and could simultaneously raise student expectations about protection that may prove impossible to deliver.

    Then the style compounds the problem. In the podcast and coverage of it, Jewish students hear confident reassurances; in the consultation response annex, Article 17 is quietly acknowledged; in public guidance, proportionality is all but absent from the “within the law” test.

    The impression is of a regulator telling each audience what it wants to hear by pointing at one end of the see-saw, rather than grappling with the hard edges of the case law in ways that may temper expectations rather than raise them.

    And given that both the free speech guidance and the E6 guidance drive home the need to get these messages into the heads of students themselves, there’s certainly nothing in there on how universities are supposed to explain all of this to students.

    It leaves universities (and by proxy their SUs) stuck in the impossible position that they have been in for months.

    They remain caught between those heavy sandbags without mechanisms to resolve the tension, having expectations raised on both ends in ways that may not be as simple in practice, and with little confidence that a good stab at making the calls, carried out in good faith, will result in anything other than Kafka’s regulator appearing with a fine either way.

    Source link

  • What OfS’ data on harassment and sexual misconduct doesn’t tell us

    New England-wide data from the Office for Students (OfS) confirms what we have known for a long time.

    A concerningly high number of students – particularly LGBTQ+ and disabled people, as well as women – are subjected to sexual violence and harassment while studying in higher education. Wonkhe’s Jim Dickinson reviews the findings elsewhere on the site.

    The data is limited to final year undergraduates who filled out the National Student Survey, who were then given the option to fill out this further module. OfS’ report on the data details the proportion of final year students who experienced sexual harassment or violence “since being a student” as well as their experiences within the last 12 months.

    It also includes data on experiences of reporting, as well as the prevalence of staff-student intimate relationships – but its omission of all postgraduate students, as well as all undergraduates other than final year students, means that its findings should be seen as one piece of a wider puzzle.

    Here, I try to lay out a few of the other pieces of the puzzle to help put the new data in context.

    The timing is important

    On 1 August 2025 the new condition of registration for higher education providers in England came into force. It requires all institutions in England to address harassment and sexual misconduct, including training for all staff and students, taking steps to “prevent abuses of power” between staff and students, and publishing a “single, comprehensive source of information” about their approach to this work, including support services and the handling of reports.

    When announcing this regulatory approach last year, OfS also published two studies from 2024 – a pilot prevalence survey of a small selection of English HEIs, as well as a ‘poll’ of a representative sample of 3000 students. I have discussed that data as well as the regulation more generally elsewhere.

    In this year’s data release, 51,920 students responded to the survey, with an overall response rate of 12.1 per cent. This is a significantly larger sample size than both of the 2024 studies, which comprised responses from 3000 and 5000 students respectively.

    This year’s survey finds somewhat lower prevalence figures for sexual harassment and “unwanted sexual contact” than last year’s studies. In the new survey, sexual harassment was experienced by 13.3 per cent of respondents within the last 12 months (and by 24.5 per cent since becoming a student), while 5.4 per cent of respondents had been subjected to unwanted sexual contact or sexual violence within the last 12 months (since becoming a student, this figure rises to 14.1 per cent).

    By any measure, these figures represent a very concerning level of gender-based violence in higher education populations. But if anything, they are at the lower end of what we would expect.

    By comparison, in OfS’ 2024 representative poll of 3000 students, over a third (36 per cent) of respondents had experienced some form of unwanted sexual contact since becoming a student with a fifth (21 per cent) stating the incident(s) happened within the past year. 61 per cent had experienced sexual harassment since being a student, and 43 per cent of the total sample had experienced this in the past year.

    The lower prevalence in the latest dataset could be (in part) because it draws on a population of final year undergraduate students – studies from the US have repeatedly found that first year undergraduate students are at the greatest risk, especially when they start their studies.

    Final year students may simply have forgotten – or blocked out – some of their experiences from first year, leading to lower prevalence. They may also have dropped out. The timing of the new survey is also important – the NSS is completed in late spring, while we would expect more sexual harassment and violence to occur when students arrive at university in the autumn.

    A study carried out in autumn or winter might find higher prevalence. Indeed, the previous two studies carried out by OfS involved data collected at different times of year – in August 2023 (for the 3000-strong poll) and ‘autumn 2023’ (for the pilot prevalence study).

    A wide range of prevalence

    Systematic reviews published in 2023 from Steele et al and Lagdon et al from across the UK, Ireland and the US have found prevalence rates of sexual violence of between 7 per cent and 86 per cent.

    Steele et al.’s recent study of Oxford University found that 20.5 per cent of respondents had experienced at least one act of attempted or forced sexual touching or rape, and 52.7 per cent of respondents experienced at least one act of sexual harassment within the past year.

    Lagdon et al.’s study of “unwanted sexual experiences” in Northern Ireland found that a staggering 63 per cent had been targeted. And my own study of a UK HEI found that 30 per cent of respondents had been subjected to sexual violence since enrolling in their university, and 55 per cent had been subjected to sexual harassment.

    For now, I don’t think it’s helpful to get hung up on comparing datasets between last year and this year that draw on somewhat different populations. It’s also not necessarily important that respondents were self-selecting within those who filled out the NSS – a US study compared prevalence rates for sexual contact without consent among students between a self-selecting sample and a non-self-selecting sample, finding no difference.

    The key take-home message is that students are being subjected to a significant level of sexual harassment and violence, and that women, LGBTQ+ and disabled students in particular are unable to access higher education in safety.

    Reporting experiences

    The findings on reporting reveal some important challenges for the higher education sector. According to OfS’ new survey findings, rates of reporting to higher education institutions remain relatively low, at 13.2 per cent of those experiencing sexual harassment and 12.7 per cent of those subjected to sexual violence.

    Of students who reported to their HEI, only around half rated their experience as “good”. Women, disabled and LGBTQ+ students who reported incidents to their university were much less satisfied with their reporting experience than men, heterosexual and non-disabled students.

    This survey doesn’t reveal why students were rating their reporting experiences as poor, but my study Higher Education After #MeToo sheds light on some of the reasons why reporting is not working out for many students (and staff).

    At the time of data collection in 2020-21, a key reason was that – according to staff handling complaints – policies in this area were not yet fit for purpose. It’s therefore not surprising that reporting was seen as ineffective and sometimes harmful for many interviewees who had reported. Four years on, hopefully HEIs have made progress in devising and implementing policies in this area, so other reasons may be relevant.

    A further issue focused on by my study is that reporting processes for sexual misconduct in HE focus on sanctions against the reported party rather than prioritising safety or other needs of those who report. Many HEIs do now have processes for putting in place safety (“precautionary” or “interim”) measures to keep students safe after reporting.

    Risk assessment practices are developing. But these practices appear to be patchy and students (and staff) who report sexual harassment or violence are still not necessarily getting the support they need to ensure their safety from further harm. Not only this, but at the end of a process they are not usually told the actions that their university has taken as a result of the report.

    More generally, there’s a mismatch between why people report, and what is on offer from universities. Forthcoming analysis of the Power in the Academy data on staff-student sexual misconduct reveals that by the time a student gets to the point of reporting or disclosing sexual misconduct from faculty/staff to their HEI, the impacts are already being felt more severely than by those who do not report.

    In laywoman’s terms, if people report staff sexual misconduct, it’s likely to be having a really bad impact on their lives and/or studies. Reasons for reporting are usually to protect oneself and others and to be able to continue in work/study. So it’s crucial that when HEIs receive reports, they are able to take immediate steps to support students’ safety. If HEIs are listening to students – including the voices of those who have reported or disclosed to their institution – then this is what they’ll be hearing.

    Staff-student relationships

    The survey also provides new data on staff-student intimate relationships. The survey details that:

    By intimate relationship we mean any relationship that includes: physical intimacy, including one-off or repeated sexual activity; romantic or emotional intimacy; and/or financial dependency. This includes both in person and online, or via digital devices.

    From this sample, 1.5 per cent of respondents stated that they had been in such a relationship with a staff member. Of those who had been involved in a relationship, a staggering 68.8 per cent of respondents said that the university or college staff member(s) had been involved with their education or assessment.

    Even as someone who researches within this area, I’m surprised by how high both these figures are. While not all students who enter into such relationships or connections will be harmed, for some, deep harms can be caused. While a much higher proportion of students who reported “intimate relationships” with staff members were 21 or over, the age of the student is no barrier to such harms.

    It’s worth revisiting some of the findings from 2024 to give some context to these points. In the 3000-strong representative survey from the OfS, a third of those in relationships with staff said they felt pressure to begin, continue or take the relationship further than they wanted because they were worried that refusing would negatively impact them, their studies or career in some way.

    Even consensual relationships led to problems when the relationship broke up. My research has described the ways in which students can be targeted for “grooming” and “boundary-blurring” behaviours from staff. These questions on coercion from the 2024 survey were omitted from the shorter 2025 version – but assuming such patterns of coercion are present in the current dataset, these findings are extremely concerning.

    They give strong support to OfS’ approach towards staff-student relationships in the new condition of registration. OfS has required HEIs to take “one or more steps which could make a significant and credible difference in protecting students from any actual or potential conflict of interest and/or abuse of power.”

    Such a step could include a ban on intimate personal relationships between relevant staff and students, but HEIs may instead choose to propose other ways to protect students from abuses of power by staff. While most HEIs appear to be implementing partial bans on such relationships, some have chosen not to.

    Nevertheless, all HEIs should take steps to clarify appropriate professional boundaries between staff and students – which, as my research shows, students themselves overwhelmingly want.

    Gaps in the data

    The publication of this data is very welcome in contributing towards a better understanding of patterns of victimisation among students in HE. It’s crucial to position this dataset within the context of an emerging body of research in this area – both the OfS’ previous publications and academic studies as outlined above – in order to build up a more nuanced understanding of students’ experiences.

    Some of the gaps in the data can be filled from other studies, but others cannot. For example, while the new OfS regulatory condition E6 covers harassment on the basis of all protected characteristics, these survey findings focus only on sexual harassment and violence.

    National data on the prevalence of racial harassment or on harassment on the basis of gender reassignment would be particularly valuable in the current climate. This decision seems to be a political choice – sexual harassment and violence is a focus that both right- and left-wing voices can agree should be addressed as a matter of urgency, while it is more politically challenging (and therefore, important) to talk about racial harassment.

    The data also omits stalking and domestic abuse, which young people – including students – are more likely than other age groups to be subjected to, according to the Crime Survey for England and Wales. My own research found that 26 per cent of respondents in a study of gender-based violence at a university in England in 2020 had been subjected to psychological or physical violence from a partner.

    It does appear that despite the narrow focus on sexual harassment and violence from the OfS, many HEIs are taking a broader approach in their work, addressing domestic abuse and stalking, as well as technology-facilitated sexual abuse.

    Another gap in the data analysis report from the OfS is around international students. Last year’s pilot study of this survey included some important findings on their experiences. International students were less likely to have experienced sexual misconduct in general than UK-domiciled students, but more likely to have been involved in an intimate relationship with a member of staff at their university (2 per cent of international students in contrast with 1 per cent of UK students).

    They were also slightly more likely to state that a staff member had attempted to pressure them into a relationship. Their experiences of accessing support from their university were also poorer. These findings are important in relation to any new policies HEIs may be introducing on staff-student relationships: as international students appear to be more likely to be targeted, communications around such policies need to be tailored to this group.

    We also know that the same groups who are more likely to be subjected to sexual violence/harassment are also more likely to experience more harassment/violence, i.e. a higher number of incidents. The new data from OfS do not report on how many incidents were experienced. Sexual harassment can be harmful as a one-off experience, but if someone is experiencing repeated harassment or unwanted sexual contact from one or more others in their university environment (and both staff and student perpetrators are likely to carry out repeated behaviours), then this can have a very heavy impact on those targeted.

    The global context

    Too often, policy and debate in England on gender-based violence in higher education fails to learn from the global context. Government-led initiatives in Ireland and Australia show good practice that England could learn from.

    Ireland ran a national researcher-led survey of staff as well as students in 2021, due to be repeated in 2026, producing detailed data that is being used to inform national and cross-institutional interventions. Australia has carried out two national surveys – in 2017 and 2021 – and informed by the results has just passed legislation for a mandatory National Higher Education Code to Prevent and Respond to Gender-based Violence.

    The data published by OfS is much more limited than these studies from other contexts in its focus on third year undergraduate students only. It will be imperative to make sure that HEIs, OfS, government or other actors do not rely solely on this data – and future iterations of the survey – as a tool to direct policy, interventions or practice.

    Nevertheless, in the absence of more comprehensive studies, it adds another piece to the puzzle in understanding sexual harassment and violence in English HE.


  • OfS’ understanding of the student interest requires improvement

    OfS’ understanding of the student interest requires improvement

    When the Office for Students’ (OfS) proposals for a new quality assessment system for England appeared in the inbox, I happened to be on a lunchbreak from delivering training at a students’ union.

    My own jaw had hit the floor several times during my initial skim of its 101 pages – and so to test the validity of my initial reactions, I attempted to explain, in good faith, the emerging system to the student leaders who had reappeared for the afternoon.

    Having explained that the regulator was hoping to provide students with a “clear view of the quality of teaching and learning” at the university, their first confusion was tied up in the idea that this was even possible in a university with 25,000 students and hundreds of degree courses.

    They’d assumed that some sort of dashboard might be produced that would help students differentiate between at least departments if not courses. When I explained that the “view” would largely be in the form of a single “medal” of Gold, Silver, Bronze or Requires improvement for the whole university, I was met with confusion.

    We’d spent some time before the break discussing the postgraduate student experience – including poor induction for international students, the lack of a policy on supervision for PGTs, and the isolation that PGRs had fed into the SU’s strategy exercise.

    When I explained that OfS was planning to introduce a PGT NSS in 2028 and then use that data in the TEF from 2030-31 – such that their university might not have the data taken into account until 2032-33 – I was met with derision. When I explained that PGRs may be incorporated from 2030-31 onwards, I was met with scorn.

    Keen to know how students might feed in, one officer asked how their views would be taken into account. I explained that as well as the NSS, the SU would have the option to create a written submission to provide contextual insight into the numbers. When one of them observed that “being honest in that will be a challenge given student numbers are falling and so is the SU’s funding”, the union’s voice coordinator (who’d been involved in the 2023 exercise) in the corner offered a wry smile.

    One of the officers – who’d had a rewarding time at the university pretty much despite their actual course – wanted to know if the system was going to tackle students like them not really feeling like they’d learned anything during their degree. Given the proposals’ intention to drop educational gain altogether, I moved on at this point. Young people have had enough of being let down.

    I’m not at home in my own home

    Back in February, you might recall that OfS published a summary of a programme of polling and focus groups that it had undertaken to understand what students wanted and needed from their higher education – and the extent to which they were getting it.

    At roughly the same time, it published proposals for a new initial Condition C5: Treating students fairly, to apply initially to newly registered providers, which drew on that research.

    As well as issues it had identified with things like contractual provisions, hidden costs and withdrawn offers, it was particularly concerned with the risk that students may take a decision about what and where to study based on false, misleading or exaggerated information.

    OfS’ own research into the Teaching Excellence Framework 2023 signals one of the culprits behind that misleading information. Polling by Savanta in April and May 2024 and follow-up focus groups with prospective undergraduates over the summer both showed that applicants consistently described TEF outcomes as too broad to be of real use for their specific course decisions.

    They wanted clarity about employability rates, continuation statistics, and job placements – but what they got instead was a single provider-wide badge. Many struggled to see meaningful differences between Gold and Silver, or to reconcile how radically different providers could both hold Gold.

    The evidence also showed that while a Gold award could reassure applicants, more than one in five students aware of their provider’s TEF rating disagreed that it was a fair reflection of their own experience. That credibility gap matters.

    If the TEF continues to offer a single label for an entire university, with data that are both dated and aggregated, there is a clear danger that students will once again be misled – this time not by hidden costs or unfair contracts, but by the regulatory tool that is supposed to help them make informed choices.

    You don’t know what I’m feeling

    Absolutely central to the TEF will remain the results of the National Student Survey (NSS).

    OfS says that’s because “the NSS remains the only consistently collected, UK-wide dataset that directly captures students’ views on their teaching, learning, and academic support,” and because “its long-running use provides reliable benchmarked data which allows for meaningful comparison across providers and trends over time.”

    It stresses that the survey provides an important “direct line to student perceptions,” which balances outcomes data and adds depth to panel judgements. In other words, the NSS is positioned as an indispensable barometer of student experience in a system that otherwise leans heavily on outcomes.

    But set aside the fact that it surveys only those who make it to the final year of a full undergraduate degree. The NSS doesn’t ask whether students felt their course content was up to date with current scholarship and professional practice, or whether learning outcomes were coherent and built systematically across modules and years – both central expectations under B1 (Academic experience).

    It doesn’t check whether students received targeted support to close knowledge or skills gaps, or whether they were given clear help to avoid academic misconduct through essay planning, referencing, and understanding rules – requirements spelled out in the guidance to B2 (Resources, support and engagement). It also misses whether students were confident that staff were able to teach effectively online, and whether the learning environment – including hardware, software, internet reliability, and access to study spaces – actually enabled them to learn. Again, explicit in B2, but invisible in the survey.

    On assessment, the NSS asks about clarity, fairness, and usefulness of feedback, but it doesn’t cover whether assessment methods really tested what students had been taught, whether tasks felt valid for measuring the intended outcomes, or whether students believed their assessments prepared them for professional standards. Yet B4 (Assessment and awards) requires assessments to be valid and reliable, moderated, and robust against misconduct – areas NSS perceptions can’t evidence.

    I could go on. The survey provides snapshots of the learning experience but leaves out important perception checks on the coherence, currency, integrity, and fitness-for-purpose of teaching and learning, which the B conditions (and students) expect providers to secure.

    And crucially, OfS has chosen not to use the NSS questions on organisation and management in the future TEF at all. That’s despite its own 2025 press release highlighting it as one of the weakest-performing themes in the sector – just 78.5 per cent of students responded positively – and pointing out that disabled students in particular reported significantly worse experiences than their peers.

    OfS said then that “institutions across the sector could be doing more to ensure disabled students are getting the high quality higher education experience they are entitled to,” and noted that the gap between disabled and non-disabled students was growing in organisation and management. In other words, not only is the NSS not fit for purpose, OfS’ intended use of it isn’t either.

    I followed the voice you gave to me

    In the 2023 iteration of the TEF, the independent student submission was supposed to be one of the most exciting innovations. It was billed as a crucial opportunity for providers’ students to tell their own story – not mediated through NSS data or provider spin, but directly and independently. In OfS’ words, the student submission provided “additional insights” that would strengthen the panel’s ability to judge whether teaching and learning really were excellent.

    In this consultation, OfS says it wants to “retain the option of student input,” but with tweaks. The headline change is that the student submission would no longer need to cover “student outcomes” – an area that SUs often struggled with given the technicalities of data and the lack of obvious levers for student involvement.

    On the surface, that looks like a kindness – but scratch beneath it, and it’s a red flag. Part of the point of Condition B2.2b is that providers must take all reasonable steps to ensure effective engagement with each cohort of students so that “those students succeed in and beyond higher education.”

    If students’ unions feel unable to comment on how the wider student experience enables (or obstructs) student success and progression, that’s not a reason to delete it from the student submission. It’s a sign that something is wrong with the way providers involve students in what’s done to understand and shape outcomes.

    The trouble is that the light touch response ignores the depth of feedback it has already commissioned and received. Both the IFF evaluation of TEF 2023 and OfS’ own survey of student contacts documented the serious problems that student reps and students’ unions faced.

    They said the submission window was far too short – dropping guidance in October, demanding a January deadline, colliding with elections, holidays, and strikes. They said the guidance was late, vague, inaccessible, and offered no examples. They said the template was too broad to be useful. They said the burden on small and under-resourced SUs was overwhelming, and even large ones had to divert staff time away from core activity.

    They described barriers to data access – patchy dashboards, GDPR excuses, lack of analytical support. They noted that almost a third didn’t feel fully free to say what they wanted, with some monitored by staff while writing. And they told OfS that the short, high-stakes process created self-censorship, strained relationships, and duplication without impact.

    The consultation documents brush most of that aside. Little in the proposals tackles the resourcing, timing, independence, or data access problems that students actually raised.

    I’m not at home in my own home

    OfS also proposes to commission “alternative forms of evidence” – like focus groups or online meetings – where students aren’t able to produce a written submission. The regulator’s claim is that this will reduce burden, increase consistency, and make it easier to secure independent student views.

    The focus group idea is especially odd. Student representatives’ main complaint wasn’t that they couldn’t find the words – it was that they lacked the time, resource, support, and independence to tell the truth. Running a one-off OfS focus group with a handful of students doesn’t solve that. It actively sidesteps the standard in B2 and the DAPs rules on embedding students in governance and representation structures.

    If a student body struggles to marshal the evidence and write the submission, the answer should be to ask whether the provider is genuinely complying with the regulatory conditions on student engagement. Farming the job out to OfS-run focus groups allows providers with weak student partnership arrangements to escape scrutiny – precisely the opposite of what the student submission was designed to do.

    The point is that the quality of a student submission is not just a “nice to have” extra insight for the TEF panel. It is, in itself, evidence of whether a provider is complying with Condition B2. It requires providers to take all reasonable steps to ensure effective engagement with each cohort of students, and says students should make an effective contribution to academic governance.

    If students can’t access data, don’t have the collective capacity to contribute, or are cowed into self-censorship, that is not just a TEF design flaw – it is B2 evidence of non-compliance. The fact that OfS has never linked student submission struggles to B2 is bizarre. Instead of drawing on the submissions as intelligence about engagement, the regulator has treated them as optional extras.

    The refusal to make that link is even stranger when compared to what came before. Under the old QAA Institutional Review process, the student written submission was long-established, resourced, and formative. SUs had months to prepare, could share drafts, and had the time and support to work with managers on solutions before a review team arrived. It meant students could be honest without the immediate risk of reputational harm, and providers had a chance to act before being judged.

    TEF 2023 was summative from the start, rushed and high-stakes, with no requirement on providers to demonstrate they had acted on feedback. The QAA model was designed with SUs and built around partnership – the TEF model was imposed by OfS and designed around panel efficiency. OfS has learned little from the feedback from those who submitted.

    But now I’ve gotta find my own

    While I’m on the subject of learning, we should finally consider how far the proposals have drifted from the lessons of Dame Shirley Pearce’s review. Back in 2019, her panel made a point of recording what students had said loud and clear – the lack of learning gain in TEF was a fundamental flaw.

    In fact, educational gain was the single most commonly requested addition to the framework, championed by students and their representatives who argued that without it, TEF risked reducing success to continuation and jobs.

    Students told the review they wanted a system that showed whether higher education was really developing their knowledge, skills, and personal growth. They wanted recognition of the confidence, resilience, and intellectual development that are as much the point of university as a payslip.

    Pearce’s panel agreed, recommending that Educational Gains should become a fourth formal aspect of TEF, encompassing both academic achievement and personal development. Crucially, the absence of a perfect national measure was not seen as a reason to ignore the issue. Providers, the panel said, should articulate their own ambitions and evidence of gain, in line with their mission, because failing to even try left a gaping hole at the heart of quality assessment.

    Fast forward to now, and OfS is proposing to abandon the concept entirely. To students and SUs who have been told for years that their views shape regulation, the move is a slap in the face. A regulator that once promised to capture the full richness of the student experience is now narrowing the lens to what can be benchmarked in spreadsheets. The result is a framework that tells students almost nothing about what they most want to know – whether their education will help them grow.

    You see the same lack of learning in the handling of extracurricular and co-curricular activity. For students, societies, volunteering, placements, and co-curricular opportunities are not optional extras but integral to how they build belonging, develop skills, and prepare for life beyond university. Access to these opportunities features heavily in the Access and Participation Risk Register precisely because they matter to student success and because they’re a part of the educational offer in and of themselves.

    But in TEF 2023 OfS tied itself in knots over whether they “count” – at times allowing them in if narrowly framed as “educational”, at other times excluding them altogether. To students who know how much they learn outside of the lecture theatre, the distinction looked absurd. Now the killing off of educational gain excludes them altogether.

    You should have listened

    Taken together, OfS has delivered a masterclass in demonstrating how little it has learned from students. As a result, the body that once promised to put student voice at the centre of regulation is in danger of constructing a TEF that is both incomplete and actively misleading.

    It’s a running theme – more evidence that OfS is not interested enough in genuinely empowering students. If students don’t know what they can, should, or could expect from their education – because the standards are vague, the metrics are aggregated, and the judgements are opaque – then their representatives won’t know either. And if their reps don’t know, their students’ union can’t effectively advocate for change.

    When the only judgements against standards that OfS is interested in come from OfS itself, delivered through a very narrow funnel of risk-based regulation, that funnel inevitably gets choked off through appeals to “reduced burden” and aggregated medals that tell students nothing meaningful about their actual course or experience. The result is a system that talks about student voice while systematically disempowering the very students it claims to serve.

    In the consultation, OfS says that it wants its new quality system to be recognised as compliant with the European Standards and Guidelines (ESG), which would in time allow it to seek membership of the European Quality Assurance Register (EQAR). That’s important for providers with international partnerships and recruitment ambitions, and for students given that ESG recognition underpins trust, mobility, and recognition across the European Higher Education Area.

    But OfS’ conditions don’t require co-design of the quality assurance framework itself, nor proof that student views shape outcomes. Its proposals expand student assessor roles in the TEF, but don’t guarantee systematic involvement in all external reviews or transparency of outcomes – both central to ESG. And as the ongoing QA-FIT project and ESU have argued, the next revision of the ESG is likely to push student engagement further, emphasising co-creation, culture, and demonstrable impact.

    If it does apply for EQAR recognition, our European peers will surely notice what English students already know – the gap between OfS’ rhetoric on student partnership and the reality of its actual understanding and actions is becoming impossible to ignore.

    When I told those student officers back on campus that their university would be spending £25,000 of their student fee income every time it has to take part in the exercise, their anger was palpable. When I added that according to the new OfS chair, Silver and Gold might enable higher fees, while Bronze or “Requires Improvement” might cap or further reduce their student numbers, they didn’t actually believe me.

    The student interest? Hardly.


  • Back to the future for the TEF? Back to school for OfS?

    Back to the future for the TEF? Back to school for OfS?

    As the new academic year dawns, there is a feeling of “back to the future” for the Teaching Excellence Framework (TEF).

    And it seems that the Office for Students (OfS) needs to go “back to school” in its understanding of the measurement of educational quality.

    Both of these feelings come from the OfS Chair’s suggestion that the level of undergraduate tuition fees institutions can charge may be linked to institutions’ TEF results.

    For those just joining us on TEF-Watch, this is where the TEF began back in the 2015 Green Paper.

    At that time, the idea of linking tuition fees to the TEF’s measure of quality was dropped pretty quickly because it was, and remains, totally unworkable in any fair and reasonable way.

    This is for a number of reasons that would be obvious to anyone who has a passing understanding of how the TEF measures educational quality, which I wrote about on Wonkhe at the time.

    Can’t work, won’t work

    First, the TEF does not measure the quality of individual degree programmes. It evaluates, in a fairly broad-brush way, a whole institution’s approach to teaching quality and related outcomes. All institutions have programmes of variable quality.

    This means that linking tuition fees to TEF outcomes could lead to significant numbers of students on lower quality programmes being charged the higher rate of tuition fees.

    Second, and even more unjustly, the TEF does not give any indication of the quality of education that students will directly experience.

    Rather, when they are applying for their degree programme, it provides a measure of an institution’s general teaching quality at the time of its last TEF assessment.

    Under the plans currently being considered for a rolling TEF, this could be up to five years previously – which would mean it gives a view of educational quality at least nine years before applicants will graduate. Even if it were from the year before they enrol, it would be based on an assessment of evidence that took place at least four years before they complete their degree programme.

    Those knowledgeable about educational quality understand that, over such a time span, educational quality could have dramatically changed. Given this, on what basis can it be fair for new students to be charged the higher rate of tuition fees as a result of a general quality of education enjoyed by their predecessors?

    These two reasons would make a system in which tuition fees were linked to TEF outcomes incredibly unfair. And that is before we even consider its impact on the TEF as a valid measure of educational quality.

    The games universities play

    The higher the stakes in the TEF, the more institutions will feel forced to game the system. In the current state of financial crisis, any institutional leader is likely to feel almost compelled to pull every trick in the book in order to ensure the highest possible tuition fee income for their institution.

    How could they not given that it could make the difference between institutional survival, a forced merger or the potential closure of their institution? This would make the TEF even less of an effective measure of educational quality and much more of a measure of how effectively institutions can play the system.

    It takes very little understanding of such processes to see that institutions with the greatest resources will be in by far the best position to finance the playing of such games. Making the stakes so high for institutions would also remove any incentive for them to use the TEF as an opportunity to openly identify educational excellence and meaningfully reflect on their educational quality.

    This would mean that the TEF loses any potential to meet its core purpose, identified by the Independent Review of the TEF, “to identify excellence and encourage enhancement”. It will instead become even more of a highly pressurised marketing exercise with the TEF outcomes having potentially profound consequences for the future survival of some institutions.

    In its own terms, the suggestion about linking undergraduate tuition fees to TEF outcomes is nothing to worry about. It simply won’t happen. What is a much greater concern is that the OfS is publicly making this suggestion at a time when it is claiming it will work harder to advocate for the sector as a force for good, and also appears to have an insatiable appetite to dominate the measurement of educational quality in English higher education.

    Any regulator that had the capacity and expertise to do either of these things would simply not be making such a suggestion at any time but particularly not when the sector faces such a difficult financial outlook.

    An OfS out of touch with its impact on the sector. Haven’t we been here before?


  • OfS pushes ahead with two tier fairness for students

    OfS pushes ahead with two tier fairness for students

    Good news for students in England. Providers will soon be subject to tough new rules that ensure they’re treated fairly. But only if they’re in a new provider. Elsewhere, it seems, the unfairness can reign on!

    Just a few days before applications to join its register reopen, the Office for Students (OfS) has published consultation outcomes and final decisions on reforms to its registration requirements.

    It sets out the regulator’s decisions following its February 2025 consultation on changes to the entry conditions that higher education providers have to meet to register with OfS, and therefore access student loan funding. It covers:

    • A new initial condition C5 (treating students fairly), replacing the old consumer protection and student protection plan conditions (C1 and C3).
    • New governance conditions E7, E8 and E9, replacing the old governance requirements (E1 and E2).
    • Tighter application requirements, including more detailed financial planning, declarations about investigations, and restrictions on resubmitting applications after refusal.

    Confusingly, the changes interact closely with two separate consultations on subcontracting.

    First, in January 2025 the Department for Education consulted on requiring delivery providers in franchised or subcontractual arrangements to register directly with OfS for their students to be eligible for student support.

    Then, in June 2025 OfS ran its own consultation on the regulation of subcontracted provision, focusing on how such providers would be assessed, overseen, and held accountable if brought into the system.

    These reforms don’t themselves impose registration on subcontracted delivery providers, but they prepare the ground – the new conditions clarify how subcontracted applicants could meet C5 and related requirements, and OfS signals that it is ready to align with whatever the government decides on the January DfE proposals.

    Chin plasters

    We’re several months on now from the initial jaw on the floor moment, but by way of reminder – the main proposals on treating students fairly are justified as follows:

    Providers are facing increasing financial challenges. They must have effective management and governance to navigate those challenges in a way that delivers good student outcomes. Where providers are making tough financial decisions, they must continue to meet the commitments they have made to students. Our engagement with students shows that being treated fairly is very important to them and suggests that too often this does not happen.

    Against that backdrop, and repeated never-met promises to act on student protection issues, you’d have thought that there would be progress on what is happening inside the 429 providers already on the register. Alas not – its centrepiece proposals on treating students fairly are only to apply to new providers, with a vague commitment to consult on what might be applied to everyone else (closing the stable door) at some point down the line (once the horse has bolted).

    But worse than that, in its infinite wisdom OfS has somehow managed to concoct a situation where for this tiny group of new providers, it will:

    • Remix lots of existing consumer protection law so that instead of talking about consumer rights, it talks about treating students fairly
    • In some areas go further than consumer protection law, because OfS can and has decided to in the student interest
    • In some areas not go as far as consumer protection law, because… reasons?

    On the topline, what’s now being introduced is a new initial registration condition – C5, “treating students fairly” – that will replace the old consumer protection entry tests for providers seeking to join the OfS register.

    Instead of simply requiring a university or college to show that it has “had due regard” to CMA guidance, applicants will have to demonstrate that they treat students fairly in practice.

    To do that, OfS will review the policies and contracts they intend to use with students, and judge them against a new “prohibited behaviours” list, a detriment test, and any track record of adverse findings under consumer or company law. In effect, OfS is shifting from a box-ticking exercise about compliance to an upfront regulatory judgement about fairness.

    Providers will have to publish a suite of student-facing documents – terms and conditions, course change policies, refund and compensation policies, and complaints processes – which together will constitute their student protection plan.

    And the scope of the new condition is deliberately broad – it covers current, prospective, and former students, higher education and ancillary services like accommodation, libraries, or disability support, and information issued to attract or recruit students, including advertising and online material. In short, C5 sets a new standard of fairness at the point of entry to the system, at least for those providers trying to join it.

    Students aren’t consumers, but they are, or are they

    The problem is the relationship with consumer law. OfS is at pains to stress that new Condition C5 sits comfortably alongside consumer law, drawing on concepts that will be familiar to anyone who has worked with CMA guidance.

    It makes use of the same building blocks – unfair terms, misleading practices, clarity of information – and even names the same statutes.

    But we’re also reminded that C5 is not consumer law – it’s a regulatory condition of registration, judged and enforced by OfS as a matter of regulatory discretion. That means satisfying C5 doesn’t guarantee compliance with the Consumer Rights Act 2015 or the Digital Markets, Competition and Consumers Act 2024, and conversely, complying with the Act doesn’t automatically secure a pass on C5. The frameworks overlap, but they don’t align.

    In some respects C5 goes further. By creating its own “prohibited behaviours list”, OfS has declared that certain contractual terms – which the Consumer Rights Act 2015 would only treat as “grey list” risks – will always be unfair in the student context. Examples include terms that allow a provider to unilaterally withdraw an offer once it has been accepted, clauses that limit liability for disruptions within the university’s own control (like industrial action), or refund policies that impose unreasonable hurdles or delays.

    The list also bans misleading representations such as claiming “degree” or “university” status without proper authority, omitting key information about additional compulsory costs, or publishing fake or cherry-picked student reviews. It even extends to the legibility and clarity of terms and policies, requiring that documents be accessible and understandable to students.

    C5 also sweeps in documents that may not ordinarily have contractual force, like course change policies or compensation arrangements, and makes them part of the fairness test. In that sense, the regulator is demanding a higher standard than the law itself, rooted in its view of the student interest.

    But in other senses, C5 lags behind. Where DMCC now treats omissions of “material information” as unlawful if they’re likely to influence a student’s decision, C5 only bites when omissions cause demonstrable detriment, judged against whether the detriment was “reasonable.”

    DMCC introduces explicit protections for situational vulnerability, and a statutory duty of professional diligence in overseeing agents and subcontractors – neither concept is reflected in C5. DMCC makes universities liable for what their agents say on TikTok about visas or jobs – C5 says providers are accountable too, but stops short of importing the full professional diligence duty that the law now demands. DMCC makes clear that the full price of a degree needs to be set out in advance – including anything you have to buy on an optional module. C5 not so much.

    We will protect you

    The problem with all of that from a student point of view is that the Competition and Markets Authority is going to take one look at all of this and think “that means we don’t have to busy ourselves with universities” – despite the rights being different, and despite no such regulation kicking in in the rest of the UK.

    And worse, it makes the chances of students understanding their rights even thinner than they are now. On that, some respondents asked for wider duties to ensure students actively understand their rights – but OfS’ response is that its focus is on whether documents are fair, clear, and not misleading, and that if issues arise in practice (like if notifications flag that students aren’t being given fair or accurate information), OfS can require further information from the provider and take action.

    How on earth students would know that their rights had been breached, and that they can email an obscure OfS inbox is never explained. Even if students find the webpage, students are told that OfS “will not be able to update you on the progress or outcome of the issue that you have raised”.

    They’d likely make a complaint instead – but even if they got as far as the OIA, unless I’ve missed it I’ve never seen a single instance of OfS taking action (either at strategic/collective level or individual) off the back of the information I’m sure it gets regularly from its friends in Reading.

    I suspect this all means that OfS will now not publish two lots of information for students on their rights – one set for those at new(ly registered) providers and another for those at existing ones – because, like pretty much every other OfS strategy on the student interest, students are framed as people to be protected by a stretched mothership rather than given some actual power themselves.

    I can make an argument, by the way, that sending complaints to lawyers to be assessed for legal risk to the provider, routinely ignoring the OIA Good Practice Framework, refusing to implement an OIA recommendation, not compensating a group when an individual’s complaint obviously applies to others who didn’t complain, using NDAs on complaints that don’t concern harassment and sexual misconduct, deploying “academic judgment” excuses on any appeal where the student is let down, or dragging out resolutions and making “deal or no deal” “goodwill” offers to coax exhausted students into settling are all pretty important fairness issues – but in a whole document on fairness, the relationship with the OIA is barely mentioned.

    As usual, almost nothing has changed between proposals and outcome – but there are a few nuggets in there. “Information for students” has been replaced with “information about the provider” – to make clear the duty extends beyond enrolled students and covers all marketing and information materials. The problem is that under DMCC, misleading information on, say, the cost of living in a given city is material – but under OfS’ “treating students fairly”, it doesn’t appear to be “about” the provider.

    OfS has clarified that its concerns about “ancillary services” apply only where there is a contract between student and provider (not with third parties), but has added that providers are responsible for information they publish about third-party services and expects universities to exercise “due diligence” on them and their contracts.

    Some language has been more closely aligned with the DMCCA (on things like omissions and fake reviews), and in its “detriment” test providers must now do “everything reasonable” rather than “everything possible” to limit it.

    Banned practices

    In some ways, it would have been helpful to translate consumer law and then go further if necessary. But looking at the overlap between the CMA’s unfair commercial practices regime and OfS’ prohibited behaviours list reveals some odd gaps.

    OfS has borrowed much of the language around misleading marketing, fake reviews, false urgency, and misused endorsements, but it has not imported the full consumer protection arsenal. The result is that students don’t seem to be guaranteed the same protections they would enjoy if they were buying a car, a washing machine, or even a mobile phone contract.

    General CMA guidance prevents companies from mimicking the look of competitors to confuse buyers – but the practice is not explicitly barred by OfS. The CMA bans direct appeals to children – no mention of the vulnerable consumer / due diligence duties in OfS’ stuff. Under DMCC, a practice that requires a consumer to take onerous or disproportionate action in order to exercise rights that they have in relation to a product or service is banned – but there’s little on that from OfS.

    Fee increases

    One note on fees and increases – in the response, OfS points to a “statement” that anyone with an Access and Participation Plan has to submit on whether it will increase fees. It supposedly has to specify the “objective verifiable index” that would be used (for example, the Retail Price Index or the Consumer Price Index). In all cases the amount must not exceed the maximum prescribed by the Secretary of State for Education, and under consumer protection law all students must have a right to cancel a contract in the event of a price increase, even where that price increase is provided for in the contract.
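
    As a worked illustration of the arithmetic involved (a sketch of the rule as described above, with invented figures, not any provider’s actual policy), a compliant uplift would apply the named index and then respect the prescribed cap:

    ```python
    # Illustrative fee uplift: apply the published index, then cap at the
    # maximum prescribed by the Secretary of State. All figures are invented.
    current_fee = 9_250
    published_index = 0.031      # e.g. the relevant RPI or CPI figure
    prescribed_maximum = 9_535

    uplifted = round(current_fee * (1 + published_index))
    new_fee = min(uplifted, prescribed_maximum)

    print(f"indexed fee £{uplifted}, charged fee £{new_fee}")
    # indexed fee £9537, charged fee £9535 - here the cap binds
    ```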

    Here are the first five I found in approved Access and Participation Plans on Google:

    • “Our intention is to charge the maximum fee, subject to the fee limits set out in Regulations” (that doesn’t seem compliant to me)
    • “We will not raise fees annually for 2024-25 new entrants” (that one from a provider that has announced that it will after all)
    • “We will not raise fees annually for 2024-25 new entrants” (that from a provider which now says that for those who started before 1 August 2025, the continuing fee will be £9,535)
    • “We will not raise fees annually for new entrants” (that from a provider which now says “the fee information and inflation statement provided on page 69 of our 2025/26 to 2028/29 Access and Participation Plan are no longer current”)
    • “Subject to the maximum fee limits set out in Regulations we will increase fees each year using RPI-X” (what it’s actually doing is increasing its fees by RPI-X as projected by the OBR, which is a very different figure, and would in no way pass muster as an “objective verifiable index”)

    I’d add to this utterly laughable situation that the CMA is very clear that the right to cancel in the event of a material change or price increase has to be exercisable in practice:

    In the HE sector, switching course or, in some cases, withdrawing and switching HE provider, is likely to be difficult or impractical in practice, bearing in mind that in many cases the student will not be able simply to transfer their credits to another HE provider, and so saying the student can switch may not improve matters for them, or alleviate the potential unfairness of a variation.

    I’m not sure there’s a provider in the country that’s compliant with that.

    Wider changes

    On its reforms to registration requirements, the exciting news is that rather than introduce one new condition of registration, there are going to be three – E7 (governing documents and business plan), E8 (fraud and inappropriate use of public funds) and E9 (fit and proper persons, knowledge and expertise).

    In the future, providers will have to submit a defined set of governing documents at registration – replacing the previous reliance on self-assessment against public interest governance principles. Providers will also have to submit a clear and comprehensive five-year business plan showing objectives, risks, compliance with ongoing conditions, and consideration of students’ interests.

    Specific senior roles (chair of governing body, accountable officer, finance lead, and an independent governor) will have to demonstrate sufficient knowledge and expertise, usually tested through interviews. And a new fit and proper persons test will mean that those in senior governance and management roles will be subject to checks on past conduct (e.g. fraud, misconduct, behaviour undermining public trust).

    Providers will also have to have comprehensive and effective arrangements to prevent, detect, and stop fraud and the inappropriate use of public funds. A “track record” test also applies, the upshot of which is that relevant convictions or regulatory sanctions within the past 60 months could bar registration unless exceptional circumstances apply.

    You’ll not be surprised to learn that in the consultation, some worried that the changes would increase bureaucracy, slow down registration, and impose disproportionate burdens on smaller providers. Others objected to the removal of self-assessment against the Public Interest Governance Principles (PIGPs) at the point of registration, fearing this would dilute student protection or cause confusion given that PIGPs still apply on an ongoing basis.

    Concerns were also raised about creating a two-tier system where new entrants face tougher entry requirements than established providers, and about the practicality of requiring a five-year business plan when forecasting beyond two or three years is often unrealistic. Many also questioned a new interview requirement for key individuals, seeing it as costly, stressful, open to coaching, and potentially inconsistent. Just like student assessment!

    OfS was right all along, of course – arguing that the new conditions give stronger protection for students and taxpayers, that the five-year planning horizon is essential to test medium-term sustainability, and that fit and proper person interviews are the most effective way to test leadership capacity.

    If you were one of the handful of respondents, it wasn’t all in vain – the phrase “policies and procedures” is now “policies and processes”, OfS has clarified the level of knowledge required (the chair and independent governor only need “sufficient awareness” of student cohorts rather than detailed operational knowledge) and a minimum requirement for fraud prevention arrangements is now in the actual condition (rather than just in guidance).

    Registering with OfS

    Much of that is now reflected in a tightening of the registration process itself. Applicants will now be required to submit a defined set of final, governing-body-approved documents at the point of application – including governing documents, financial forecasts, business plans, and information on ownership and corporate structure.

    The idea is to eliminate the previous piecemeal approach, under which providers often submitted partial or draft materials, and to ensure that applications arrive complete, coherent, and capable of demonstrating that a provider has the resources and arrangements necessary to comply with the ongoing conditions of registration.

    Some argued that the shift makes the process more rigid and burdensome, particularly for smaller or specialist providers, and warned that requiring fully approved documents could create practical difficulties or delay applications. Others were worried about duplication with other regulators and barriers to entry for innovative providers.

    Again, OfS is pressing on regardless, arguing that a standardised approach will improve efficiency and consistency, while promising proportionate application of the rules, detailed guidance on the required documents, and limited flexibility where a final document cannot yet exist.

    To the extent that some might argue that a heavy and complex burden is a tough ask for small new providers – and runs counter to the original Jo Johnson “Byron Burgers” vision – the message seems to be that, it turns out, scale and complexity are required to protect public money and the student interest. It would arguably be a lot easier (on both OfS and Independent HE’s members) if DfE were to just say so.

    Defeat from the jaws of victory

    Sometimes, OfS gets close to getting it – finally, an education regulator properly thinking through the ways in which students are treated unfairly – only to go and spoil it and say something stupid like “this will only apply to new providers”.

    As I noted when the consultation came out, what we now have is one set of rights for students in a new(ly registering) provider that they’ll never be proactively told about, and another set of much weaker ones for everyone else that they’re not told about either, all in the name of “fairness”, at exactly the point that the regulator itself admits is one where providers are under pressure to not deliver on some of the promises they made to students.

    The lack of justification or explanation for that remains alarming – and while cock up is often a better explanation than conspiracy, it’s hard to conclude anything other than OfS has proactively decided to turn a blind eye (while blindfolding students) to existing unfairness while everyone gets their cuts done. What a time to be a student.


  • OfS Outcomes (B3) data, 2025

    OfS Outcomes (B3) data, 2025

    The Office for Students’ release of data relating to Condition of Registration B3 is the centrepiece of the English regulator’s quality assurance approach.

    There’s information on three key indicators: continuation (broadly, the proportion of students who move from year one to year two), completion (pretty much the proportion who complete the course they sign up for), and progression (the proportion who end up in a “good” destination – generally high skilled employment or further study).
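
    To make those definitions concrete, here’s a minimal sketch of how such proportions might be computed from student-level records – the column names and figures are invented for illustration, and this is not the actual OfS data model or methodology:

    ```python
    import pandas as pd

    # Hypothetical student-level records (invented for illustration).
    students = pd.DataFrame({
        "continued_to_year_two": [1, 1, 0, 1],
        "completed_course": [1, 0, 0, 1],
        "good_destination": [1, 0, None, 1],  # None = destination unknown
    })

    # Each indicator is the share of a defined population achieving the outcome.
    continuation = students["continued_to_year_two"].mean()
    completion = students["completed_course"].mean()
    # Progression is measured only over students whose destination is known.
    progression = students["good_destination"].dropna().mean()

    print(f"continuation {continuation:.1%}, completion {completion:.1%}, "
          f"progression {progression:.1%}")
    ```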

    Why B3 data is important

    The power comes from the ability to view these indicators for particular populations of students – everything from those studying a particular subject or sharing a given personal characteristic, through to how a course is delivered. The thinking goes that this level of resolution allows OfS to focus in on particular problems – for example a dodgy business school (or franchise delivery operation) in an otherwise reasonable quality provider.

    The theory goes that OfS uses these B3 indicators – along with other information such as notifications from the public, Reportable Event notifications from the provider itself, or (seemingly) comment pieces in the Telegraph – to decide when and where to intervene in the interests of students. Most interventions are informal, and are based around discussions between the provider and OfS about the identified problem and what is being done to address it. There have been some more formal investigations too.

    Of course, providers themselves will be using similar approaches to identify problems in their own provision – in larger universities this will be built into a sophisticated data-driven learner analytics approach, while some smaller providers will primarily use what is in this release (and this is partly why I take the time to build interactives that I feel are more approachable and readable than the OfS versions).

    Exploring B3 using Wonkhe’s interactive charts

    These charts are complicated because the data itself is complicated, so I’ll go into a bit of detail about how to work them. Let’s start with the sector as a whole:

    [Interactive chart – full screen version available at source]

    First, choose your indicator: continuation, completion, or progression.

    Mode (whether students are studying full time, part time, or on an apprenticeship) and level (whether students are undergraduate, postgraduate, and so on) are linked: there are more options for full and part time study (including first degree, taught postgraduate, and PhD) and fewer for apprenticeships (where you can see either all undergraduates or all postgraduates).

    The chart shows various splits of the student population in question – the round marks show the actual value of the indicator, and the crosses show the current numeric threshold (which is what OfS has told us is the point below which it would start getting stuck into regulating).

    Some of the splits are self-explanatory, others need a little unpacking. The Index of Multiple Deprivation (IMD) is a standard national measure of how socio-economically deprived a small area is – quintile 1 is the most deprived, quintile 5 the least deprived. Associations Between Characteristics of Students (ABCS) is a proprietary measure developed by OfS which is a whole world of complexity: here all you need to know is that quintile 5 students are most likely to have good outcomes on average, and quintile 1 students are least likely.

    If you mouse over any of the marks you will get some more information: the year(s) of data involved in producing the indicator (by definition most of this data refers to a number of years ago and shouldn’t really be taken as an indication of a problem that is happening right now), and the proportion of the sample that is above or below the threshold. The denominator is simply the number of students involved in each split of the population.

    There’s also a version of this chart that allows you to look at an individual provider: choose that via the drop-down in the middle of the top row.

    [Interactive chart – full screen version available at source]

    You’ll note you can select your population: “taught or registered” includes students taught by the provider and students who are registered with the provider but taught elsewhere (subcontracted out); “taught only” is just those students taught by the provider (so no subcontractual provision); “partnership” includes only students where teaching is contracted out or validated (the student is both registered and taught elsewhere, but the qualification is validated by this provider).

    On the chart itself, you’ll see a benchmark marked with an empty circle: this is what OfS has calculated (based on the characteristics of the students in question) the value of the indicator should be – the implication being that any difference from the benchmark is entirely down to the provider. In the mouse-over I’ve also added the proportion of students in the sample above and below the benchmark.
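
    For intuition, a benchmark of this kind can be thought of as a weighted average: take the sector-wide rate for each group of students, weight it by that group’s share of the provider’s intake, and add the results up. Here’s a simplified sketch with invented figures – OfS’ actual benchmarking methodology is considerably more involved:

    ```python
    # Simplified characteristics-weighted benchmark (all figures invented).
    sector_rate = {"group_a": 0.92, "group_b": 0.84}   # sector continuation rates
    provider_mix = {"group_a": 0.30, "group_b": 0.70}  # provider's student mix

    # Expected ("benchmark") rate given the provider's intake profile.
    benchmark = sum(sector_rate[g] * provider_mix[g] for g in sector_rate)

    observed = 0.88  # the provider's actual indicator value
    print(f"benchmark {benchmark:.1%} vs observed {observed:.1%}")
    # benchmark 86.4% vs observed 88.0% - slightly better than the student
    # profile alone would predict
    ```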

    OfS take great pains to ensure that B3 measures can’t be seen as a league table, as this would make their quality assurance methodology look simplistic and context-free. Of course, I have built a league table anyway just to annoy them: the providers are sorted by the value of the indicator, with the other marks shown as above (note that not all options have a benchmark value). Here you can select a split indicator type (the group of characteristics you are interested in) and then the split indicator (specific characteristic) you want to explore using the menus in the middle of the top row – the two interact and you will need to set them both.

    You can find a provider of interest using the highlighter at the bottom, or just mouse over a mark of interest to get the details on the pop-up.

    [Interactive chart – full screen version available at source]

    With so much data going on there is bound to be something odd somewhere – I’ve tried to spot everything but if there’s something I’ve missed please let me know via an email or a comment. A couple of things you may stumble on – OfS has suppressed data relating to very small numbers of students, and if you ever see a “null” value for providers it refers to the averages for the sector as a whole.

    Yes, but does it regulate?

    It is still clear that white and Asian students have generally better outcomes than those from other ethnicities, that a disadvantaged background makes you less likely to do well in higher education, and that students who studied business are less likely to have a positive progression outcome than those who studied the performing arts.

    You might have seen The Times running with the idea that the government is contemplating restrictions on international student visas linked to the completion rates of international students. It’s not the best idea for a number of reasons, but should it be implemented, a quick look at the ranking chart (domicile: non-UK) will let you know which providers would be at risk in that situation: for first degree it’s tending towards the Million Plus end of things, for taught Masters provision we are looking at smaller non-traditional providers.

    Likewise, the signs are clear that a crackdown on poorly performing validated provision is incoming – using the ranking chart again (population type: partnership, splits: type of partnerships – only validated) shows us a few places that might have completion problems when it comes to first degree provision.

    If you are exploring these (and I bet you are!) you might note some surprisingly low denominator figures – surely there has been an explosion in this type of provision recently? This demonstrates the Achilles heel of the B3 data: completion data relates to pre-pandemic years (2016-2019), continuation to 2019-2022. Using four years of data to find an average is useful when provision isn’t changing much – but given the growth of validation arrangements in recent years, what we see here tells us next to nothing about the sector as it currently is.

    Almost to illustrate this point, the Office for Students today announced an investigation into the sub-contractual arrangement between Buckinghamshire New University and the London School of Science and Technology. You can examine these providers in B3 and if you look at the appropriate splits you can see plenty of others that might have a larger problem – but it is what is happening in 2025 that has an impact on current students.


  • The Critical Role of University Leaders in Shaping Safer Cultures and Meeting OfS Condition E6 on Harassment and Sexual Misconduct

    The Critical Role of University Leaders in Shaping Safer Cultures and Meeting OfS Condition E6 on Harassment and Sexual Misconduct

    Harassment and sexual misconduct have no place on our university campuses, nor in wider society. Yet, both continue to be pervasive. The Office for National Statistics reports that 1 in 10 people aged 16 years and over experienced at least one form of harassment in the previous 12 months, while the Crime Survey for England and Wales reveals that “an estimated 7.9 million (16.6%) adults aged 16 years and over had experienced sexual assault since the age of 16 years”. The adverse sequelae for victims/survivors are well documented. 

    The Office for Students (OfS), noting the absence of national-level data at higher education institutions (HEIs), piloted the design and delivery of a national sexual misconduct prevalence survey in 2023 (full survey due to be reported in September 2025). The study, involving 12 volunteering institutions, found 20% of participating students experienced sexual harassment and 9% experienced sexual assault/violence. The 4% response rate requires cautious interpretation of the findings; however, they are in line with other studies.

    Over the last decade, universities have taken these matters more seriously, appreciating the impact both on victims/survivors and on their institution’s culture and reputation. In 2016, Universities UK and Pinsent Masons published guidance (updated in 2022) for HEIs on managing student misconduct, including sexual misconduct and behaviour that may constitute a crime. From 1 August 2025, the OfS will strengthen universities’ obligations through the introduction of Condition E6, which requires institutions to enact robust, responsive policies to address harassment and sexual misconduct, and to promote a proactive, preventative culture. Our experience, however, suggests that universities’ preparedness varies, and the deadline is not far away.

    Culture Starts at the Top

    Organisational culture is shaped significantly by those at the top. At its heart is ‘the way things are done around here’: the established, normative patterns of behaviour and interaction. Senior leaders have the power to challenge and change entrenched patterns of behaviour, or to reinforce them. Thus, compliance with Condition E6 is just a starting point; herein lies an opportunity for university leaders to go further and transform institutional culture to the benefit of all.

    Understandably, times of significant financial challenge may cause executive teams to quail at further demands on limited resources. This can precipitate a light-touch, bare-minimum, additive approach: devolving almost exclusive responsibility to a university directorate to work out how to do even more with less. Yet the manifold benefits of inclusive cultures are well established, including improved performance and productivity and lower rates of harassment and sexual violence. Leadership attention to, and engagement in, building a positive culture will see wider improvements follow. Moreover, hard though it is to write this, we know from our own work in the sector that some leaders or teams are not modelling the ‘right’ behaviour.

    Ultimately, the imperative to transform culture is in the best interests of the institution, although it should also manifest a desire for social justice. Consequently, university governors need to understand and have oversight of that imperative: though narrowly it is a regulatory requirement, strategically it is the route to a happier, healthier and more productive community, one likely to generate the outputs and outcomes the governing authority seeks for a successful and sustainable institution.

    Creating Safer Cultures

    We use the term ‘safer culture’ to refer to a holistic organisational environment that is intolerant of harassment, discrimination, and mistreatment in any form. Underpinning the sustainable development of a safer culture are eight key pillars:

    1. Leadership Commitment, Governance and Accountability
      Senior leaders and university governors need to visibly and actively promote an inclusive and respectful culture, holding themselves and others accountable. Strategic allocation of resources and institutional infrastructure needs to support cultural change, and governance mechanisms must enable assurance against objectives. A whole-institution approach is required to avoid commitments becoming initiative-based, siloed, inconsistent or symbolic: the responsibility should be shared and collective.
    2. Clear Policies, Procedures and Systems
      Institutions need to develop accessible policies that define inappropriate behaviour, including harassment and sexual misconduct, and outline clear consequences for non-adherence. Associated procedures and systems should support effective prevention and response measures.
    3. Training and Development
      A tiered training approach should be adopted to embed shared understanding, develop capability and confidence, raise awareness, and foster appropriate levels of accountability across the organisation: among students and staff, including the executive team and governing body. Specialist skills training for those in frontline and support roles is essential.
    4. Reporting Processes
      Simple, reliable, confidential and trusted reporting mechanisms are required. These must protect against retaliation, avoid the need to repeat disclosure information unnecessarily, and provide swift access to appropriate support through a minimum of touchpoints.
    5. Provision of Support
      A trauma-informed, empathetic environment is crucial to ensure individuals feel safe and supported, whether they are disclosing misconduct or have been accused of such. User-focused support systems and wellbeing services need to be in place for all members of the university’s community.
    6. Investigation and Resolution
      Fair, timely, and impartial processes are required which uphold the rights of all parties and enforce meaningful consequences when misconduct is confirmed. Those involved must be appropriately trained and supported to ensure just outcomes for all.
    7. Risk Management
      Risk should be proactively identified and appropriately managed. Individuals throughout the organisation need to understand their responsibility in relation to risk, both individual and institutional.
    8. Evaluation and Continuous Improvement
      Creating a safer culture requires regular evaluation through policy review, data analysis and reporting, including staff and student feedback. This is essential to address emerging issues, enhance interventions in line with changing policy and practice, and achieve cultural maturity.

    A Leadership Imperative

    The imminent introduction of Condition E6 offers university leaders an opportunity to bring renewed and purposeful focus to developing an institutional culture that is safe, respectful and high-achieving: the very foundation of academic excellence, creativity and innovation. At a time when equity, diversity and inclusion are under threat worldwide, including in the UK, the imperative has never been greater.

    Source link