Tag: regulation

  • Government AI regulation could censor protected speech online

    Edan Kauer is a former FIRE intern and a sophomore at Georgetown University.


    Elliston Berry was just 14 years old when a male classmate at Aledo High in North Texas used AI to create fake nudes of her based on images he took from her social media. He then did the same to seven other girls at the school and shared the images on Snapchat. 

    Now, two years later, Berry and her classmates are the inspiration for Senator Ted Cruz’s Take It Down Act (TIDA), a recently enacted law that gives social media platforms 48 hours to remove “revenge porn” once it is reported. The law considers any non-consensual intimate imagery (NCII), including AI deepfakes, to fall under this category. But despite its noble intentions, its dangerously vague wording is a threat to free speech.

    This law, which covers both adults and minors, makes it illegal to publish an image of an identifiable minor that meets the definition of “intimate visual depiction” (certain explicit nudity or sexual conduct) with intent to “arouse or gratify the sexual desire of any person” or to “abuse, humiliate, harass, or degrade the minor.”


    That may sound like a no-brainer, but deciding what content this text actually covers, including what counts as “arousing,” “humiliating,” or “degrading,” is highly subjective. This law risks chilling protected digital expression, prompting social media platforms to censor harmless content like a family beach photo, a sports team picture, or images of injuries or scars to avoid legal penalties or respond to bad-faith reports.

    Civil liberties groups such as the Electronic Frontier Foundation (EFF) have noted that the language of the law itself raises censorship concerns because it’s vague and therefore easily exploited:

    Take It Down creates a far broader internet censorship regime than the Digital Millennium Copyright Act (DMCA), which has been widely abused to censor legitimate speech. But at least the DMCA has an anti-abuse provision and protects services from copyright claims should they comply. This bill contains none of those minimal speech protections and essentially greenlights misuse of its takedown regime … Congress should focus on enforcing and improving these existing protections, rather than opting for a broad takedown regime that is bound to be abused. Private platforms can play a part as well, improving reporting and evidence collection systems. 

    Nor does the law address the possibility of people filing bad-faith reports.

    In the 2002 case Ashcroft v. Free Speech Coalition, the Court said the language of the Child Pornography Prevention Act (CPPA) was so broad that it could have been used to censor protected speech. Congress passed the CPPA to combat the circulation of computer-generated child pornography, but as Justice Anthony Kennedy explained in the majority opinion, the language of the CPPA could be used to censor material that appears to depict child pornography without actually doing so.

    Also in 2002, the Supreme Court heard Ashcroft v. ACLU, which came about after Congress passed the Child Online Protection Act (COPA) to prevent minors from accessing adult content online. But again, due to the broad language of the bill, the Court found the law would restrict adults who were within their First Amendment rights to access mature content.

    As with the Take It Down Act, these were laws created to protect children from sexual exploitation online, yet built on vague and overly broad standards that threaten protected speech.

    Unfortunately, stories like the one at Aledo High are becoming more common as AI becomes more accessible. Last year, boys at Westfield High School in New Jersey used AI to circulate fake nudes of Francesca Mani, then 14 years old, and other girls in her class. Westfield High administrators were caught off guard, as they had never experienced this type of incident. Although the Westfield police were notified and the perpetrators were suspended for up to two days, parents criticized the school for its weak response.

    A year later, the school district developed a comprehensive AI policy and amended its bullying policy to cover harassment carried out through “electronic communication,” which includes “the use of electronic means to harass, intimidate, or bully including the use of artificial intelligence (“AI”) technology.” What’s true for Westfield High is true for America — existing laws are often more than adequate to deal with emerging tech issues. By classifying AI material under electronic communication as a category of bullying, Westfield High demonstrates that the creation of new AI policies is redundant. On a national scale, the same can be said for classifying and prosecuting instances of child abuse online.

    While we must acknowledge that online exploitation is a very real issue, we cannot solve the problem at the expense of other liberties. Once we grant the government the power to silence the voices we find distasteful, we open the door to censorship. Though it is essential to address the very real harms of emerging AI technology, we must also keep our First Amendment rights intact.

  • TEF6: the incredible machine takes over quality assurance regulation

    If you loved the Teaching Excellence Framework, were thrilled by the outcomes (B3) thresholds, lost your mind for the Equality of Opportunity Risk Register, and delighted in the sporadic risk-based OfS investigations based on years-old data, you’ll find a lot to love in the latest set of Office for Students proposals on quality assurance.

    In today’s Consultation on the future approach to quality regulation you’ll find a cyclical, cohort-based TEF that also includes a measurement (against benchmarks) of compliance with the thresholds for student outcomes inscribed in the B3 condition. Based on the outcomes of this super-TEF, and prioritised by an assessment of risk, OfS will make interventions (including controls on recruitment and the conditions of degree awarding powers) and targeted investigations. This is a first-stage consultation only; stage two will come in August 2026.

    It’s not quite a grand unified theory: we don’t mix in the rest of the B conditions (covering less pressing matters like academic standards, the academic experience, student support, assessment) because, in the words of OfS:

    Such an approach would be likely to involve visits to all providers, to assess whether they meet all the relevant B conditions of registration

    The students who are struggling right now with the impacts of higher student/staff ratios and a lack of capacity due to over-recruitment will greatly appreciate this reduction in administrative burden.

    Where we left things

    When we last considered TEF we were expecting an exercise every four years, drawing on provider narrative submissions (which included a chunk on a provider’s own definition and measurement of educational gain), students’ union narrative submissions, and data on outcomes and student satisfaction. Providers were awarded a “medal” for each of student outcomes and student experience – a matrix determined whether this resulted in an overall Bronze, Silver, Gold, or Requires Improvement.

    The first three of these awards were deemed to be above minimum standards (with slight differences between each), while the latter was a portal to the much more punitive world of regulation under group B (student experience) conditions of registration. Most of the good bits of this approach came from the genuinely superb Pearce Review of TEF conducted under section 26 of the Higher Education and Research Act, which fixed a lot of the statistical and process nonsense that had crept in under previous iterations and then-current plans (though not every recommendation was implemented).

    TEF awards were last made in 2023, with the next iteration – involving all registered providers plus anyone else who wanted to play along – due in 2027.

    Perma-TEF

    A return to a rolling TEF rather than a quadrennial quality enhancement jamboree means a pool of TEF assessors rather than a one-off panel. There will be steps taken to ensure that an appropriate group of academic and student assessors is selected to assess each cohort – there will be special efforts made to use those with experience of smaller, specialist, and college-based providers – and a tenure of two-to-three years is planned. OfS is also considering whether its staff can be included among the storied ranks of those empowered to facilitate ratings decisions.

    Likewise, we’ll need a more established appeals system. Open only to those with Bronze or Requires Improvement ratings (Gold and Silver are passing grades), it would be a way to potentially forestall engagement and investigations based on an active risk to student experience or outcomes, or a risk of a future breach of a condition of registration.

    Each provider would be assessed once every three years – all providers taking part in the first cycle would be assessed in either 2027-28, 2028-29, or 2029-30. The exercise covers only undergraduate students because there’s no postgraduate NSS yet – OfS plans to develop one before 2030. In many cases providers will only know which year applies to them at the start of the academic year in question, which will give them six months to get their submissions sorted.

    Because Bronze is now bad (rather than “good but not great” as it used to be) the first year’s cohort could well include all providers with a 2023 Bronze (or Requires Improvement) rating, plus some with increased risks of non-compliance, some with Bronze in one of the TEF aspects, and some without a rating.

    After this, how often you are assessed depends on your rating – if you are Gold overall it is five years till the next try, Silver means four years, and Bronze three (if you are “Requires Improvement” you probably have other concerns beyond the date of your next assessment) but this can be tweaked if OfS decides there is an increased risk to quality or for any other reason.

    Snakes and ladders

    Ignore the gradations and matrices in the Pearce Review – the plan now is that your lowest TEF aspect rating (remember you got sub-awards last time for student experience and student outcomes) will be your overall rating. So Silver for experience and Bronze for outcomes makes for an overall Bronze. As OfS has decided that you now have to pay (likely around £25,000) to enter what is a compulsory exercise, this is a cost that could lead to a larger cost in future.
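    To make the mechanics concrete, here is a minimal sketch of how the “lowest aspect wins” rule and the reassessment intervals described above would combine. The ordering, the function names, and the Requires Improvement interval are this sketch’s assumptions, not OfS specifications:

    ```python
    # Illustrative sketch only: encodes the "lowest aspect rating wins" rule and
    # the proposed reassessment intervals as described in the consultation.
    RATINGS = ["Requires Improvement", "Bronze", "Silver", "Gold"]  # worst to best

    def overall_rating(experience: str, outcomes: str) -> str:
        """The overall award is simply the lower of the two aspect ratings."""
        return min(experience, outcomes, key=RATINGS.index)

    def years_to_next_assessment(overall: str) -> int:
        """Gold waits five years, Silver four, Bronze three. A Requires
        Improvement provider faces intervention rather than a fixed date;
        three years here is a placeholder assumption."""
        return {"Gold": 5, "Silver": 4, "Bronze": 3, "Requires Improvement": 3}[overall]

    assert overall_rating("Silver", "Bronze") == "Bronze"
    assert years_to_next_assessment(overall_rating("Silver", "Bronze")) == 3
    ```

    And, as above, OfS could bring any of these dates forward if it decides there is an increased risk to quality, or for any other reason.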

    In previous TEFs, the only negative consequence for those outside the top ratings has been reputational – a loss of bragging rights of, arguably, negligible value. The new proposals align Bronze with the (B3) minimum required standards and put Requires Improvement below them: in the new calculus of value the minimum is not good enough, and there will be consequences.

    We’ve already had some hints that a link to fee cap levels is back on the cards, but in the meantime OfS is pondering a cap on student numbers expansion to punish those who turn out Bronze or Requires Improvement. The workings of the expansion cap will be familiar to those who recall the old additional student numbers process – increases of more than five per cent (the old tolerance band, which is still a lot) would not be permitted for poorly rated providers.

    For providers without degree awarding powers, it is unlikely they will be successful in applying for them with Bronze or below – but OfS is also thinking about restricting aspects of existing providers’ DAPs, for example limiting their ability to subcontract or franchise provision in future. This is another de facto numbers cap in many cases, and it all comes ahead of a future consultation on DAPs that could make for an even closer link with TEF.

    Proposals for progression

    Proposal 6 will simplify the existing B3 thresholds and integrate the way they are assessed into the TEF process. In a nutshell, the progression requirement for B3 would disappear – the assessment would be made purely on continuation and completion, with providers able to submit contextual and historic information as part of the TEF process to explain why performance is not above the benchmark or threshold.

    Progression will still be considered at the higher levels of TEF, and here contextual information can play more of a part – with what I propose we start calling the Norland Clause allowing providers to submit details of courses that lead to jobs that ONS does not consider professional or managerial. That existing indicator will be joined by another based on graduate reflections (from Graduate Outcomes) on how they are using what they have learned, and by benchmarked salaries three years after graduation from DfE’s Longitudinal Educational Outcomes (LEO) data – in deference to that random Kemi Badenoch IFS commission at the tail end of the last parliament.

    Again, there will be contextual benchmarks for these measures (and hopefully some hefty caveating on the use of LEO median salaries) – and, as is the pattern in this consultation, there are detailed proposals to follow.

    Marginal gains, marginal losses

    The “educational gains” experiment, pioneered in the last TEF, is over: this makes it the third time a regulator in England has tried and failed to include a measure of learning gain in some form of regulation. OfS is still happy for you to mention your educational gain work in your next narrative submission, but it isn’t compulsory. The reason: reducing burden, and a focus on comparability rather than a diversity of bespoke measures.

    Asking providers what something means in their context, rather than applying a one-size-fits-all measure of student success, was an immensely powerful component of the last exercise. Providers who started on that journey at considerable expense in data gathering and analysis may be less than pleased at this latest development – and we’d certainly understood that DfE were fans of the approach too.

    Similarly, the requirement for students to feed back on outcomes in their submissions to TEF has been removed. The ostensible reason is that students found it difficult last time round – the result is that insight from the valuable networks between existing students and their recently graduated peers is lost. The outcomes end of TEF is now very much data driven, with only the chance to explain unusual results offered. It’s a retreat from some of the contextual sense that crept in with the Pearce Review.

    Business as usual

    Even though TEF now feels like it is everywhere and for always, there’s still a place for OfS’ regular risk-based monitoring – and annex I (yes, there are that many annexes) contains a useful draft monitoring tool.

    Here it is very good to see staff:student ratios, falling entry requirements, a large growth in foundation year provision, and a rapid growth in numbers among what are noted as indicators of risk to the student experience. It is possible to imagine an excellent system, designed outside the seemingly inviolate framework of the TEF, where events like these would trigger an investigation of provider governance and quality assurance processes.

    Alas, the main use of this monitoring is to decide whether or not to bring a TEF assessment forward, something that punts an immediate risk to students into something that will be dealt with retrospectively. If I’m a student on a first year that has ballooned from 300 to 900 from one cycle to the next, there is a lot of good a regulator can do by acting quickly – I am unlikely to care whether a Bronze or Silver award is made in a couple of years’ time.

    International principles

    One of the key recommendations of the Behan review on quality was a drawing together of the various disparate (and, yes, burdensome) streams of quality and standards assurance and enhancement into a unified whole. We obviously don’t quite get there – but there has been progress towards another key sector bugbear that came up both in Behan and in the Lords’ Industry and Regulators Committee review: adherence to international quality assurance standards (to facilitate international partnerships and, increasingly, recruitment).

    OfS will “work towards applying to join the European Quality Assurance Register for Higher Education” at the appropriate time – clearly feeling that the long overdue centring of the student voice in quality assurance (there will be an expanded role for and range of student assessors) and the incorporation of a cyclical element (to desk assessments at least) is enough to get them over the bar.

    It isn’t. Principle 2.1 of the EQAR ESG requires that “external quality assurance should address the effectiveness of the internal quality assurance processes” – philosophically establishing the key role of providers themselves in monitoring and upholding the quality of their own provision, with the external assurance process primarily assessing whether (and how well) this has been done. For whatever reason OfS believes the state (in the form of the regulator) needs to be (and is capable of being!) responsible for all quality assurance, everywhere, all the time. It’s a glaring weakness of the OfS system that urgently needs to be addressed. And it hasn’t been, this time.

    The upshot is that while the new system looks ESG-ish, it is unlikely to be judged to be in full compliance.

    Single word judgements

    The recent use of single headline judgements of educational quality in ways that have far-reaching regulatory implications is hugely problematic. The government announced the abandonment of the old “requires improvement, inadequate, good, and outstanding” judgements for schools in favour of a more nuanced “report card” approach – driven in part by the death by suicide of headteacher Ruth Perry in 2023. The “inadequate” rating given to her Caversham Primary School would have meant forced academisation and deeper regulatory oversight.

    Regulation and quality assurance in education needs to be rigorous and reliable – it also needs to be context-aware and focused on improvement rather than retribution. Giving single headline grades cute, Olympics-inspired names doesn’t really cut it – and as we approach the fifth redesign of an exercise that has only run six times since 2016 you would perhaps think that rather harder questions need to be asked about the value (and cost!) of this undertaking.

    If we want to assess and control the risks of modular provision, transnational education, rapid expansion, and a growing number of innovations in delivery we need providers as active partners in the process. If we want to let universities try new things we need to start from a position that we can trust universities to have a focus on the quality of the student experience that is robust and transparent. We are reaching the limits of the current approach. Bad actors will continue to get away with poor quality provision – students won’t see timely regulatory action to prevent this – and eventually someone is going to get hurt.

  • People want AI regulation — but they don’t trust the regulators

    Generative AI is changing the way we learn, think, discover, and create. Researchers at UC San Diego are using generative AI technology to accelerate climate modeling. Scientists at Harvard Medical School have developed a chatbot that can help diagnose cancers. In Belarus, Venezuela, and Russia, political dissidents and embattled journalists have created AI tools to bypass censorship.

    Despite these benefits, a recent global survey from The Future of Free Speech, a think tank where I am the executive director, finds that people around the world support strict guardrails — whether imposed by companies or governments — on the types of content that AI can create.

    These findings were part of a broader survey that ranked 33 countries on overall support for free speech, including on controversial but legal topics. In every country, even high-scoring ones, fewer than half supported AI generating content that, for instance, might offend religious beliefs or insult the national flag — speech that would be protected in most democracies. While some people might believe these topics should be beyond reproach, the ability to question these orthodoxies is a fundamental freedom that underpins free and open societies.

    This tension reflects two competing approaches for how societies should harness AI’s power. The first, “User Empowerment,” sees generative AI as a powerful but neutral tool. Harm lies not in the tool itself, but in how it’s used and by whom. This approach affirms that free expression includes not just the right to speak, but the right to access information across borders and media — a collective good essential to informed choice and democratic life. Laws should prohibit using AI to commit fraud or harassment, not ban AI from discussing controversial political topics.

    The second, “Preemptive Safetyism,” treats some speech as inherently harmful and seeks to block it before it’s even created. While this instinct may seem appealing given AI’s potential to supercharge the production of harmful content, it risks turning AI into a tool of censorship and control, especially in the hands of powerful corporate or political actors.

    As AI becomes an integrated operating system in our everyday life, it is critical that we not cut off access to ideas and information that may challenge us. Otherwise, we risk limiting human creativity and stifling scientific discovery.

    Concerns over AI moderation

    In 2024, The Future of Free Speech analyzed the policies of six major chatbots and tested 268 prompts to see how they handled controversial but legal topics, such as the participation of transgender athletes in women’s sports and the “lab-leak” theory. We found that chatbots refused to generate content for more than 40% of prompts. This year, we repeated our tests and found that refusal rates dropped significantly, to about 25%.

    Despite these positive developments, our survey’s findings indicate that people are comfortable with companies and governments erecting strict guardrails on what their AI chatbots can generate, which may result in large-scale government-mandated corporate control of users’ access to information and ideas.

    Overwhelming opposition to political deepfakes

    Unsurprisingly, the category of AI content that received the lowest support across the board in our survey was deepfakes of politicians. No more than 38% of respondents in any country expressed approval of political deepfakes. This finding aligns with a surge of legislative activity in both the U.S. and abroad as policymakers rush to regulate the use of AI deepfakes in elections.

    At least 40 U.S. states introduced deepfake-related bills in the 2024 legislative session alone, with more than 50 bills already enacted. China, the EU, and others are all scrambling to pass laws requiring the detection, disclosure, and/or removal of deepfakes. Europe’s AI Act requires platforms to mitigate nebulous and ill-defined “systemic risks to society,” which could lead companies to preemptively remove lawful but controversial speech like deepfakes critical of politicians.

    Although deepfakes can have real-world consequences, First Amendment advocates who have challenged deepfake regulations in the U.S. rightly argue that laws targeting political deepfakes open the door for governments to censor lawful dissent, criticism, or satire of candidates, a vital function of the democratic process. This is not a merely speculative risk.

    The editor of a far-right German media outlet received a seven-month suspended prison sentence for sharing a fake meme of the Interior Minister holding a sign that ironically read, “I hate freedom of speech.” For much of 2024, Google restricted Gemini’s ability to generate factual responses about Indian Prime Minister Narendra Modi after the Indian government accused the company of breaking the law when its chatbot responded that Modi had been “accused of implementing policies some experts characterized as fascist.”

    And despite panic over AI-driven disinformation undermining global elections in 2024, studies from Princeton, the EU, and the Alan Turing Institute found no evidence that a wave of deepfakes affected election results in places like the U.S., Europe, or India.

    People want regulation but don’t trust regulators

    A recent Pew Research Center survey found that nearly six in 10 U.S. adults believed the government would not adequately regulate AI. Our survey confirms these findings on a global scale. In all countries surveyed except Taiwan, at least a plurality supported dual regulation by both governments and tech companies.

    Indeed, a 2023 Pew survey found that 55% of Americans supported government restrictions on false information online, even if it limited free expression. But a 2024 Axios poll found that more Americans fear misinformation from politicians than from AI, foreign governments, or social media. In other words, the public appears willing to empower those they distrust most with policing online and AI misinformation.

    A new FIRE poll, conducted in May 2025, underscores this tension. Although about 47% of respondents said they prioritize protecting free speech in politics, even if that means tolerating some deceptive content, 41% said it’s more important to protect people from misinformation than to protect free speech. Even so, 69% said they were “moderately” to “extremely” concerned that the government might use AI rules to silence criticism of elected officials.

    In a democracy, public opinion matters — and The Future of Free Speech survey suggests that people around the world, including in liberal democracies, favor regulating AI to suppress offensive or controversial content. But democracies are not mere megaphones for majorities. They must still safeguard the very freedoms — like the right to access information, question orthodoxy, and challenge those in power — that make self-government possible.

    We should avoid Preemptive Safetyism

    The dangers of Preemptive Safetyism are most vividly on display in China, where AI tools like DeepSeek must enforce “core socialist values,” avoiding topics like Taiwan, Xinjiang, or Tiananmen, even when released in the West. What looks like a safety net can easily become a dragnet for dissent.

    Speech being generated by a machine does not negate the human right to receive it, especially as those algorithms become central to the very search engines, email clients, and word processors that we use as an interface for the exchange of ideas and information in the digital age.

    The greatest danger to speech often arises not from what is said, but from the fear of what might be said. An open society cannot thrive if its digital architecture is built to exclude dissent by design.

  • Voters strongly support prioritizing freedom of speech in potential AI regulation of political messaging, poll finds

    • 47% say protecting free speech in politics is the most important priority, even if that lets some deceptive content slip through
    • 28% say government regulation of AI-generated or AI-altered content would make them less likely to share content on social media
    • 81% showed concern about government regulation of election-related AI content being abused to suppress criticism of elected officials

    PHILADELPHIA, June 5, 2025 — Americans strongly believe that lawmakers should prioritize protecting freedom of speech online rather than stopping deceptive content when it comes to potential regulation of artificial intelligence in political messaging, a new national poll of voters finds.

    The survey, conducted by Morning Consult for the Foundation for Individual Rights and Expression, reflects a complicated, or even conflicted, public view of AI: People are wary about artificial intelligence but are uncomfortable with the prospect of allowing government regulators to chill speech, censor criticism and prohibit controversial ideas.

    “This poll reveals that free speech advocates have their work cut out for them when it comes to making our case about the important principles underpinning our First Amendment, and how they apply to AI,” said FIRE Director of Research Ryne Weiss. “Technologies may change, but strong protections for free expression are as critical as ever.” 

    Sixty percent of those surveyed believe sharing AI-generated content is more harmful to the electoral process than government regulation of it. But when asked to choose, more voters (47%) prioritize protecting free speech in politics over stopping deceptive content (37%), regardless of political ideology. Sixty-three percent agree that the right to freedom of speech should be the government’s main priority when making laws that govern the use of AI.

    And 81% are concerned about official rules around election-related AI content being abused to suppress criticism of elected officials. A little more than half are concerned that strict laws making it a crime to publish an AI-generated/AI-altered political video, image, or audio recording would chill or limit criticism about political candidates.

    Voters are evenly split over whether AI is fundamentally different from other forms of speech and thus should be regulated differently. Photoshop and video editing, for example, have been used by political campaigns for many years, and 43% believe the use of AI by political campaigns should be treated the same as the use of older video, audio, and image editing technologies.

    “Handing more authority to government officials will be ripe for abuse and immediately step on critical First Amendment protections,” FIRE Legislative Counsel John Coleman said. “If anything, free expression is the proper antidote to concerns like misinformation, because truth dependably rises above.”

    The poll also found:

    • Two-thirds of those surveyed said it would be unacceptable for someone to use AI to create a realistic political ad that shows a candidate at an event they never actually attended by digitally adding the candidate’s likeness to another person.
    • 39% say it would be unacceptable for a political campaign to use any digital software, including AI, to reduce the visibility of wrinkles or blemishes on a candidate’s face in a political ad in order to improve the candidate’s appearance, compared to 29% who say it would be acceptable.
    • 42% agree that AI is a tool that facilitates an individual’s ability to practice their right to freedom of speech.

    The poll was conducted May 13-15, 2025, among a sample of registered voters in the US. A total of 2,005 interviews were conducted online across the US for a margin of error of plus or minus 2 percentage points. Frequency counts may not sum to 2,005 due to weighting and rounding.
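    As a rough sanity check on the stated margin of error (assuming a simple random sample at 95% confidence with maximal variance p = 0.5; in practice, weighting reduces the effective sample size):

    \[
    \mathrm{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{2005}} \approx 0.022
    \]

    which is roughly the quoted plus or minus 2 percentage points.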

    The Foundation for Individual Rights and Expression (FIRE) is a nonpartisan, nonprofit organization dedicated to defending and sustaining the individual rights of all Americans to free speech and free thought — the most essential qualities of liberty. FIRE educates Americans about the importance of these inalienable rights, promotes a culture of respect for these rights, and provides the means to preserve them.

    CONTACT
    Karl de Vries, Director of Media Relations, FIRE: 215-717-3473; [email protected] 

  • Risk-based quality regulation – drivers and dynamics in Australian higher education

    by Joseph David Blacklock, Jeanette Baird and Bjørn Stensaker

    ‘Risk-based’ models for quality regulation have become increasingly popular in higher education globally. At the same time, there is limited knowledge of how risk-based regulation can be implemented effectively.

    Australia’s Tertiary Education Quality and Standards Agency (TEQSA) started to implement risk-based regulation in 2011, aiming at an approach balancing regulatory necessity, risk, and proportionate regulation. Our recently published study analyses TEQSA’s evolution between 2011 and 2024 to contribute to an emerging body of research on the practice of risk-based regulation in higher education.

    The challenges of risk-based regulation

    Risk-based approaches are seen as a way to create more effective and efficient regulation, targeting resources to the areas or institutions of greatest risk. However, it is widely acknowledged that sector specificities, political economy, and social context exert a significant influence on the practice of risk-based regulation (Black and Baldwin, 2010). Choices made by the regulator also affect its stakeholders and its perceived effectiveness – consider, for example, whose ideas about risk are privileged. Balancing the expectations of these stakeholders with the regulator’s federal mandate has required much in the way of compromise.

    The evolution of TEQSA’s approaches

    Our study uses a conceptual framework suggested by Hood et al (2001) for comparative analyses of risk regulation regimes, charting aspects of context and of content respectively. With this as a starting point we arrive at two theoretical constructs, ‘hyper-regulation’ and ‘dynamic regulation’, as a way to analyse the development of TEQSA over time. These opposing concepts of regulatory approach represent both theoretical and empirical executions of the risk-based model within higher education.

    From extensive document analysis, independent third-party analysis, and Delphi interviews, we identify three phases to TEQSA’s approach:

    • 2011-2013, marked by practices similar to ‘hyper-regulation’, including suspicion of institutions, burdensome requests for information and a perception that there was little ‘risk-based’ discrimination in use
    • 2014-2018, marked by the use of more indicators of ‘dynamic regulation’, including reduced evidence requirements for low-risk providers, sensitivity to the motivational postures of providers (Braithwaite et al. 1994), and more provider self-assurance
    • 2019-2024, marked by a broader approach to the identification of risks, greater attention to systemic risks, and more visible engagement with Federal Government policy, as well as the disruption of the pandemic.

    Across these three periods, we map a series of contextual and content factors to chart those that have remained more constant and those that have varied more widely over time.

    Of course, we do not suggest that TEQSA’s actions fit precisely into these timeframes, nor do we suggest that its actions have been guided by a wholly consistent regulatory philosophy in each phase. After the early and very visible adjustment of TEQSA’s approach, there has been an ongoing series of smaller changes, influenced also by the available resources, the views of successive TEQSA commissioners and the wider higher education landscape as a whole.

    Lessons learned

    Our analysis, building on ideas and perspectives from Hood, Rothstein and Baldwin, offers a comparatively simple yet informative taxonomy for future empirical research.

    TEQSA’s start-up phase, in which a hyper-regulatory approach was used, can be linked to a contextual need of the Federal Government at the time to support Australia’s international education industry, leading to the rather dominant judicial framing of its role. However, TEQSA’s initial regulatory stance failed to take account of the largely compliant regulatory posture of the universities that enrol around 90% of higher education students in Australia, and of the strength of this interest group. The new agency was understandably nervous about Government perceptions of its performance; however, a broader initial charting of stakeholder risk perspectives could have provided better guardrails. Similarly, a wider questioning of the sources of risk in TEQSA’s first and second phases could have highlighted more systemic risks.

    A further lesson for new risk-based regulators is to ensure that the regulator itself has a strong understanding of risks in the sector, to guide its analyses, and can readily obtain the data to generate robust risk assessments.

    Our study illustrates that risk-based regulation in practice is as negotiable as any other regulatory instrument. The ebb and flow of TEQSA’s engagement with the Federal Government and other stakeholders provides the context. As predicted by various authors, constant vigilance and regular recalibration are needed by the regulator as the external risk landscape changes and the wider interests of government and stakeholders dictate. The extent to which there is political tolerance for any ‘failure’ of a risk-based regulator is often unstated and always variable.

    Joseph David Blacklock is a graduate of the University of Oslo’s Master’s in Higher Education programme, with a special interest in risk-based regulation and government instruments for managing quality within higher education.

    Jeanette Baird consults on tertiary education quality assurance and strategy in Australia and internationally. She is Adjunct Professor of Higher Education at Divine Word University in Papua New Guinea and an Honorary Senior Fellow of the Centre for the Study of Higher Education at the University of Melbourne.

    Bjørn Stensaker is a professor of higher education at University of Oslo, specializing in studies of policy, reform and change in higher education. He has published widely on these issues in a range of academic journals and other outlets.

    This blog is based on our article in Policy Reviews in Higher Education (online 29 April 2025):

    Blacklock, JD, Baird, J & Stensaker, B (2025) ‘Evolutionary stages in risk-based quality regulation in Australian higher education 2011–2024’ Policy Reviews in Higher Education, 1–23.

  • Can we use LEO in regulation?

    The Institute for Fiscal Studies answers the last government’s question on earnings data in regulation. David Kernohan reads along.

  • Don’t let Texas criminalize free political speech in the name of AI regulation

    This essay was originally published by the Austin American-Statesman on May 2, 2025.


    Texans aren’t exactly shy about speaking their minds — whether it’s at city hall, in the town square, or all over social media. But a slate of bills now moving through the Texas Legislature threatens to make that proud tradition a criminal offense.

    In the name of regulating artificial intelligence, lawmakers are proposing bills that could turn political memes, commentary and satire into crimes.

    Senate Bills 893 and 228, and House Bills 366 and 556, may aim to protect election integrity, but these bills actually impose sweeping restrictions that could silence ordinary Texans just trying to express their opinions.

    Take SB 893 and its companion HB 2795. These would make it a crime to create and share AI-generated images, audio recordings, or videos if done with the intent to “deceive” and “influence the result of an election.” The bills offer a limited safeguard: if you want to share any covered images, you must edit them to add a government-mandated warning label.

    But the bills never define what counts as “deceptive,” handing prosecutors a blank check to decide what speech crosses the line. That’s a recipe for selective enforcement and criminalizing unpopular opinions. And SB 893 has already passed the Senate.

    HB 366, which just passed the House, goes even further. It would require a disclaimer on any political ad that contains “altered media,” even when the content isn’t misleading. With the provisions applying to anyone spending at least $100 on political advertising (easily the amount a person could spend to boost a social media post or print some flyers), a private citizen could be subject to the law.

    Once this threshold is met, an AI-generated meme, a five-second clip on social media, or a goofy Photoshop that gives the opponent a giant cartoon head would all suddenly need a legal warning label. No exceptions for satire, parody or commentary are included. If it didn’t happen in real life, you’re legally obligated to slap a disclaimer on it.

    HB 556 and SB 228 take a similarly broad approach, treating all generative AI as suspect and criminalizing creative political expression.

    These proposals aren’t just overkill; they’re unconstitutional. Courts have long held that parody, satire, and even sharp political attacks are protected speech. Requiring Texans to add disclaimers to their opinions simply because they used modern tools to express them is not transparency. It’s compelled speech.

    Besides, Texas already has laws on the books to address defamation, fraud and election interference. What these bills do is expand government control over how Texans express themselves while turning political expression into a legal minefield.

    Fighting deception at the ballot box shouldn’t mean criminalizing creativity or chilling free speech online. Texans shouldn’t need a lawyer to know whether they can post a meme they made on social media or make a joke about a candidate.

    Political life in Texas has been known to be colorful, rowdy and fiercely independent — and that’s how it should stay. Vague laws and open-ended definitions shouldn’t dictate what Texans can say, how they can say it, or which tools they’re allowed to use.

    The Texas Legislature should scrap these overbroad AI bills and defend the Lone Star State’s real legacy: fearless, unapologetic free speech.

  • So now will the government take the chainsaw to HE regulation?

    The Prime Minister recently declared that Britain has ‘too much regulation and too many regulators’ before the shock announcement to abolish the world’s biggest quango, NHS England. Since December, the Government has been fighting a war against red tape, which it believes is hindering economic growth. University Alliance, and I suspect most of the higher education sector, has some sympathy with the PM on this – at least when it comes to higher education regulation. I cannot remember a meeting in the past several years when the burden of regulation was not brought up as a key source of the sector’s woes.

    We need to be clear here that regulating higher education is important. The recent Sunday Times coverage alleging serious fraud in the higher education franchised provision system is testament to that, and it is right that the government and the regulator continue to act robustly. The question, then, is less whether higher education needs regulating at all, but rather whether the right regulators are regulating the right activity in the right way. It should be perfectly possible to have a tough regulator that prevents fraud and acts in the student interest while also reducing duplication in the system and focusing in on the areas of highest risk.

    The sheer volume of external regulatory demand placed upon our sector goes well beyond the well-documented teething problems with our fledgling regulator, the Office for Students (OfS). To outside observer Alex Usher of Canada’s Higher Education Strategy Associates, it appears extreme:

    ‘Canada has no REF, no TEF, no KEF. We have nothing resembling the Office for Students. External quality assurance, where it exists, is so light touch as to be basically invisible. This does not stop us from having four or five universities in the Global top 100, eight in the top 200, and twenty or so in the top 500.’

    The volume of regulatory requirements is even higher for vocationally oriented and professionally accredited provision, which is the lifeblood of Alliance universities. In addition to the OfS, courses which provide access to the so-called ‘regulated professions’ are also overseen by a wide range of Professional, Statutory and Regulatory Bodies (PSRBs), each with their own requirements. PSRBs have wide authority over course content, assessment, and quality assurance, with formal reaccreditation required every three to six years on average.

    In some cases, particularly in the sphere of healthcare education, multiple PSRBs can have some degree of authority over a single course. For example, an undergraduate degree course in Occupational Therapy must meet the requirements of the OfS, the Health and Care Professions Council (HCPC) and the Royal College of Occupational Therapists (RCOT). Often, these different processes and requirements overlap and duplicate one another.

    If this seems excessive, it is nothing compared to the requirements imposed upon degree apprenticeships. Not only are they regulated by the OfS and, given their vocational nature, likely by PSRBs, but they are also subject to the fiendishly complex funding assurance review procedure of the Education and Skills Funding Agency (ESFA), as well as in-person Ofsted inspections at least every five to six years that can take up to a week. A recent UA report on healthcare apprenticeships found that this means they are more expensive to deliver than traditional degrees.

    The problem of regulatory burden in higher education has been continually flagged by sector bodies and by the House of Lords Industry and Regulators Committee, which called for a Higher Education Data Reduction Taskforce. Despite this, the issue has been mostly ignored by policymakers, bar a few small initiatives. It does not feature in any of the Government’s higher education reform priorities, although the Education Secretary is asking universities to become more efficient and the OfS expects them to take ‘rapid and decisive action’ to avoid going bust.

    With 72% of higher education providers facing potential deficit by 2025/26, it is a mystery why the higher education sector – an acknowledged engine of economic growth – appears to have been left out in the cold while this unexpected reprise of the bonfire of the quangos is being lit. To our knowledge, neither the PM nor the Chancellor has called on higher education sector regulators to demand a cut in the cost and burden of regulation as they have done for others.

    Universities are rightfully subject to robust regulation, but the current regime is disproportionate, diverting dwindling resources away from teaching, student services and research. In the absence of more funding, cutting the cost and burden of regulation would go a long way. The establishment of Skills England, with its convening power and wide-angle, long-focus lens, should be used meaningfully to cut bureaucracy for degree apprenticeships while maintaining quality. Responsibility for monitoring the quality of degree apprenticeships should be given back to the OfS rather than Ofsted, and the ESFA audit process should be simplified. The OfS should also make a public commitment to cut the cost and burden of its regulation and work more closely with other sector regulators and PSRBs to avoid overlap and duplication.

    At a time when the Chancellor has urged ‘every regulator, no matter what sector’ to enact a ‘cultural shift’ and tear down the regulatory barriers that are holding back growth, cutting the cost of regulation in higher education should be a top priority.

  • Effective regulation requires a degree of trust

    At one point in my career, I was the CEO of a students’ union who’d been charged with attempting to tackle a culture of initiation ceremonies in sports clubs.

    One day a legal letter appeared on my desk – the gist of which was “you can’t punish these people if they didn’t know the rules”.

    We trawled back through the training and policy statements – and found moments where we’d made clear that not only did we not permit initiation ceremonies, we’d defined them as follows:

    An initiation ceremony is any event at which members of a group are expected to perform an activity as a means of gaining credibility, status or entry into that group. This peer pressure is normally (though not explicitly) exerted on first-year students or new members and may involve the consumption of alcohol, eating various foodstuffs, nudity and other behaviour that may be deemed humiliating or degrading.

    The arguments being advanced were fourfold. The first was that where we had drawn the line between freedom to have fun and harmful behaviour, both in theory and in practice, was wrong.

    The second was that we’d not really enforced anything like this before, and appeared to be wanting to make an example out of a group of students over which a complaint had been raised.

    The third was that we’d failed both to engender understanding of where the line was that we were setting for those running sports clubs, and to make clear our expectations over enforcing that line.

    And fourth, given there had been no intent to cause harm, it was put to us that the focus on investigations and punishments, rather than support to clubs to organise safe(r) social activity, was both disproportionate and counter-productive.

    And so to the South coast

    I’ve been thinking quite a bit about that affair in the context of the Office for Students (OfS) decision to fine the University of Sussex some £585k over both policy and governance failings identified during its three-year investigation into free speech at Sussex.

    One of the things that you can debate endlessly – and there’s been plenty of it on the site – is where you draw the line between freedom to speak and freedom from harm.

    That’s partly because even if you have an objective of securing an environment characterised by academic freedom and freedom of speech, if you don’t take steps to cause students to feel safe, there can be a silencing effect – which at least in theory there’s quite a bit of evidence on (including inside the Office for Students).

    You can also argue that the “make an example of them” thing is unfair – but ever since a copper stopped me on the M4 doing 85mph one afternoon, I’ve been reminded of the old “you can’t prove your innocence by proving others’ guilt” line.

    Four days after OfS says it “identified reports” about an “incident” at the University of Sussex, then-Director of Compliance and Student Protection Susan Lapworth took to the stage at Independent HE’s conference to signal a pivot from registration to enforcement.

    She noted that the statutory framework gave OfS powers to investigate cases where it was concerned about compliance, and to enforce compliance with conditions where it found a breach.

    She signalled that that could include requiring a provider to do something, or not do something, to fix a breach; the imposition of a monetary penalty; the suspension of registration; and the deregistration of a provider if that proved necessary.

    “That all sounds quite fierce”, she said. “But we need to understand which of these enforcement tools work best in which circumstances.” And, perhaps more importantly “what we want to achieve in using them – what’s the purpose of being fierce?”

    The answer was that OfS wanted to create incentives for all providers to comply with their conditions of registration:

    For example, regulators assume that imposing a monetary penalty on one provider will result in all the others taking steps to comply without the regulator needing to get involved.

    That was an “efficient way” to secure compliance across a whole sector, particularly for a regulator like OfS that “deliberately doesn’t re-check compliance for every provider periodically”.

    Even if you agree with the principle, you can argue that it’s pretty much failed at that over the intervening years – which is arguably why the £585k fine has come as so much of a shock.

    But it’s the other two aspects of that initiation thing – the understanding one and the character of interventions one – that I’ve also been thinking about this week in the context of the Sussex fine.

    Multiple roles

    On The Wonkhe Show, Public First’s Jonathan Simons worries about OfS’ multiple roles:

    If the Office for Students is acting in essentially a quasi-judicial capacity, they can’t, under that role, help one of the parties in a case try to resolve things. You can’t employ a judge to try and help you. But if they are also trying to regulate in the student interest, then they absolutely can and should be working with universities to try and help them navigate this – rather than saying, no, we think we know what the answer is, but you just have to keep on revising your policy, and at some point we may or may not tell you got it right.

    It’s a fair point. Too much intervention, and OfS appears compromised when enforcing penalties. Too little, and universities struggle to meet shifting expectations – ultimately to the detriment of students.

    As such, you might argue that OfS ought to draw firmer lines between its advisory and enforcement functions – ensuring institutions receive the necessary support to comply while safeguarding the integrity of its regulatory oversight. At the very least, maybe it should choose who fronts which bits, rather than its current style of “here’s our Director for X, who will both advise and crack down.”

    But it’s not as if OfS doesn’t routinely combine advice with enforcement – its access and participation function does just that. There’s a whole research spin-off dedicated to what works, extensive advice on risks to access and participation and what ought to be in providers’ APPs, and most seem to agree that the character of that team is appropriately balanced in its plan approval and monitoring processes – even if I sometimes worry that poor performance in those plans is routinely going unpunished.

    And that’s not exactly rare. The Regulator’s Code seeks to promote “proportionate, consistent and targeted regulatory activity” through the development of “transparent and effective dialogue and understanding” between regulators and those they regulate. Sussex says that throughout the long investigation, OfS refused to meet in person – confirmed by Arif Ahmed in the press briefing.

    The Code also says that regulators should carry out their activities in a way that “supports those they regulate to comply” – and there’s good reason for that. The original Code actually came from something called the Hampton Report – in the 2004 Budget, Gordon Brown tasked businessman Philip Hampton with reviewing regulatory inspection and enforcement, and his report makes the point about example-setting:

    The penalty regime should aim to have an effective deterrent effect on those contemplating illegal activity. Lower penalties result in weak deterrents, and can even leave businesses with a commercial benefit from illegal activity. Lower penalties also require regulators to carry out more inspection, because there are greater incentives for companies to break the law if they think they can escape the regulator’s attention. Higher penalties can, to some extent, improve compliance and reduce the number of inspections required.

    But the review also noted that regulators were often slow, could be ineffective in targeting persistent offenders, and that the structure of some regulators, particularly local authorities, made effective action difficult. And some of that was about a failure to use risk-based regulation:

    The 1992 book Responsive Regulation, by Ian Ayres and John Braithwaite, was influential in defining an ‘enforcement pyramid’, up which regulators would progress depending on the seriousness of the regulatory risk, and the non-compliance of the regulated business. Ayres and Braithwaite believed that regulatory compliance was best secured by persuasion in the first instance, with inspection, enforcement notices and penalties being used for more risky businesses further up the pyramid.

    The pyramid game

    Responsive Regulation is a cracking book if you’re into that sort of thing. Its pyramid illustrates how regulators can escalate their responses from persuasion to punitive measures based on the behaviour of the regulated entities:

    In one version of the compliance pyramid, four broad categories of client (called archetypes) are defined by their underlying motivational postures:

    1. The disengaged clients who have decided not to comply,
    2. The resistant clients who don’t want to comply,
    3. The captured clients who try to comply, but don’t always succeed, and
    4. The accommodating clients who are willing to do the right thing.

    Sussex has been saying all week that it’s been either 3 or 4, but does seem to have been treated like it’s 1 or 2.

    As such, Responsive Regulation argues that regulators should aim to balance the encouragement of voluntary compliance with the necessity of enforcement – and of course that balance is one of the central themes emerging in the Sussex case, with VC Sacha Roseneil taking to PoliticsHome to argue that:

    …Our experience reflects closely the [Lords’ Industry and Regulators] committee’s observations that it “gives the impression that it is seeking to punish rather than support providers towards compliance, while taking little note of their views.” The OfS has indeed shown itself to be “arbitrary, overly controlling and unnecessarily combative”, to be failing to deliver value for money and is not focusing on the urgent problem of the financial sustainability of the sector.

    At roughly the same time as the Hampton Report, Richard Macrory – one of the leading environmental lawyers of his generation – was tasked by the Cabinet Office to lead a review on regulatory sanctions covering 60 national regulators, as well as local authorities.

    His key principle was that sanctions should aim to change offender behaviour by ensuring future compliance and potentially altering organisational culture. He also argued they should be responsive and appropriate to the offender and issue, ensure proportionality to the offence and harm caused, and act as a deterrent to discourage future non-compliance.

    To get there, he called for regulators to publish an enforcement policy for transparency and consistency, to justify their actions annually, and to be clear about how administrative penalties are calculated.

    These are also emerging as key issues in the Sussex case – Roseneil argues that the fine is “wholly disproportionate” and that OfS abandoned, without any explanation, most of the provisional findings it originally communicated in 2024.

    The Macrory and Hampton reviews went on to influence the UK Regulatory Enforcement and Sanctions Act 2008, codifying the Ayres and Braithwaite compliance pyramid into law via the Regulators’ Code. The current version also includes a duty to ensure that clear information, guidance and advice is available to help those they regulate meet their responsibilities to comply – and that’s been on my mind too.

    Knowing the rules and expectations

    The Code says that regulators should provide clear, accessible, and concise guidance using appropriate media and plain language for their audience. It says they should consult those they regulate to ensure guidance meets their needs, and create an environment where regulated entities can seek advice without fear of enforcement.

    It also says that advice should be reliable and aimed at supporting compliance, with mechanisms in place for collaboration between regulators. And where multiple regulators are involved, they should consider each other’s advice and resolve disagreements through discussion.

    That’s partly because Hampton had argued that advice should be a central part of a regulator’s function:

    Advice reduces the risk of non-compliance, and the easier the advice is to access, and the more specific the advice is to the business, the more the risk of non-compliance is reduced.

    Hampton argued that regulatory complexity creates an unmet need for advice:

    Advice is needed because the regulatory environment is so complex, but the very complexity of the regulatory environment can cause business owners to give up on regulations and ‘just do their best’.

    He said that regulators should prioritise advice over inspections:

    The review has some concerns that regulators prioritise inspection over advice. Many of the regulators that spoke to the review saw advice as important, but not as a priority area for funding.

    And he argued that advice builds trust and compliance without excessive enforcement:

    Staff tend to see their role as securing business compliance in the most effective way possible – an approach the review endorses – and in most cases, this means helping business rather than punishing non-compliance.

    If we cast our minds back to 2021, despite the obvious emerging complexities in freedom of speech, OfS had in fact done very little to offer anything resembling advice – either on the Public Interest Governance Principles at stake in the Sussex case, or on the interrelationship between them and issues of EDI and harassment.

    Back in 2018, a board paper had promised, in partnership with the government and other regulators, an interactive event to encourage better understanding of the regulatory landscape – that would bring leaders in the sector together to “showcase projects and initiatives that are tackling these challenges”, experience “knowledge sharing sessions”, and the opportunity for attendees to “raise and discuss pressing issues with peers from across the sector”.

    The event was eventually held – in not very interactive form – in December 2022.

    Reflecting on a previous Joint Committee on Human Rights report, the board paper said that it was “clear that the complexity created by various forms of guidance and regulation is not serving the student interest”, and that OfS could “facilitate better sharing of best practice whilst keeping itself apprised of emerging issues.”

    I’m not aware of any activity to that end by October 2021 – and even though OfS consulted on draft guidance surrounding the “protect” duty last year, it has been blocking our FOI attempts ever since to see the guidance it was set to issue when implementation was paused, despite us arguing that it would have been helpful for providers to see how it was interpreting the balancing acts we know are often required when weighing all the legislation and case law.

    The board paper also included a response to the JCHR saying that it would be helpful to report on free speech where prompted by a change in the risk profile of how free speech is upheld. Nothing to that end had appeared by 2021, and still hasn’t – unless we count a couple of Arif Ahmed speeches.

    Finally, the paper said that it was “not planning to name and shame providers” where free speech had been suppressed, but would publish regulatory action and the reasons for it where there had been a breach of registration condition E2.

    Either there have been plenty of less serious interventions without any of the promised signals to the sector, or, for all the sound and fury about the issue in the media, there really haven’t been any cases other than Sussex to write home about since.

    Willing, but ready and able?

    The point about all of that – at least in this piece – is that it’s actually perfectly OK for a regulator to both advise and judge.

    It isn’t so much to evaluate whether the fine or the process has been fair, nor to suggest that the regulator shouldn’t be deploying the “make an example to promote compliance” tactic.

    But it is to say that tactics like those should obviously be used in a properly risk-based context – and that where there’s recognised complexity, the very least a regulator should do is offer clear advice. It’s very hard to see how that function has been fulfilled thus far.

    In the OECD paper Reducing the Risk of Policy Failure: Challenges for Regulatory Compliance, regulation is supposed to be about ensuring that those regulated are ready, willing and able to comply:

    • Ready means clients who know what compliance is – and if there’s a knowledge constraint, there’s a duty to educate and exemplify. It’s not been done.
    • Able means clients who are able to comply – and if there’s a capability constraint, there’s a duty to enable and empower. That’s not been done either.
    • Willing means clients who want to comply – and if there’s an attitudinal constraint, there’s a duty to “engage, encourage [and then] enforce”.

    It’s hard to see how “engage” or “encourage” have been done – either by October 2021 or to date.

    And so it does look like an assumption on the part of the regulator – that providers and SUs arguing complexity have been disingenuous, and so aren’t willing to secure free speech – is what has led to the record fine in the Sussex case.

    If that’s true, evidence-free assumptions of that sort are what will destroy the sort of trust that underpins effective regulation in the student interest.


  • Podcast: Wales cuts, mental health, regulation

    Podcast: Wales cuts, mental health, regulation

    This week on the podcast: the Welsh government has announced £18.5m in additional capital funding for universities – but questions remain over reserves, job cuts, competition law and student protection.

    Meanwhile, new research reveals student mental health difficulties have tripled in the past seven years, and Universities UK warns that OfS’ new strategy risks expanding regulatory burden rather than focusing on priorities.

    With Andy Westwood, Professor of Public Policy at the University of Manchester, Emma Maslin, Senior Policy and Research Officer at AMOSSHE, Livia Scott, Partnerships Coordinator at Wonkhe and presented by Jim Dickinson, Associate Editor at Wonkhe.

    Read more

    The government’s in a pickle over fees and funding

    As the cuts rain down in Wales, whatever happened to learner protection?

    Partnership and promises are not incompatible

    Student mental health difficulties are on the rise, and so are inequalities
