Tag: regulation

  • People want AI regulation — but they don’t trust the regulators

    People want AI regulation — but they don’t trust the regulators

    Generative AI is changing the way we learn, think, discover, and create. Researchers at UC San Diego are using generative AI technology to accelerate climate modeling. Scientists at Harvard Medical School have developed a chatbot that can help diagnose cancers. In Belarus, Venezuela, and Russia, political dissidents and embattled journalists have created AI tools to bypass censorship.

    Despite these benefits, a recent global survey from The Future of Free Speech, a think tank where I am the executive director, finds that people around the world support strict guardrails — whether imposed by companies or governments — on the types of content that AI can create.

    These findings were part of a broader survey that ranked 33 countries on overall support for free speech, including on controversial but legal topics. In every country, even high-scoring ones, fewer than half supported AI generating content that, for instance, might offend religious beliefs or insult the national flag — speech that would be protected in most democracies. While some people might find these topics beyond reproach, the ability to question these orthodoxies is a fundamental freedom that underpins free and open societies.

    This tension reflects two competing approaches for how societies should harness AI’s power. The first, “User Empowerment,” sees generative AI as a powerful but neutral tool. Harm lies not in the tool itself, but in how it’s used and by whom. This approach affirms that free expression includes not just the right to speak, but the right to access information across borders and media — a collective good essential to informed choice and democratic life. Laws should prohibit using AI to commit fraud or harassment, not ban AI from discussing controversial political topics.

    The second, “Preemptive Safetyism,” treats some speech as inherently harmful and seeks to block it before it’s even created. While this instinct may seem appealing given the potential for using AI to supercharge harm production, it risks turning AI into a tool of censorship and control, especially in the hands of powerful corporate or political actors.

    As AI becomes an integrated operating system in our everyday life, it is critical that we not cut off access to ideas and information that may challenge us. Otherwise, we risk limiting human creativity and stifling scientific discovery.

    Concerns over AI moderation

    In 2024, The Future of Free Speech analyzed the policies of six major chatbots and tested 268 prompts to see how they handled controversial but legal topics, such as the participation of transgender athletes in women’s sports and the “lab-leak” theory. We found that chatbots refused to generate content for more than 40% of prompts. This year, we repeated our tests and found that refusal rates had dropped significantly, to about 25% of prompts.
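
    These refusal figures are simple proportions over the prompt set. As an illustration only (the article does not include the study’s per-prompt refusal counts or scoring code, so the counts below are hypothetical values chosen to match the reported percentages), a tally might look like this:

    ```python
    # Hypothetical sketch of how refusal rates are tallied across test prompts.
    # The refusal counts here are illustrative, not the study's actual data.

    def refusal_rate(refused: list[bool]) -> float:
        """Share of prompts the chatbot refused to answer."""
        return sum(refused) / len(refused) if refused else 0.0

    PROMPTS = 268
    results_2024 = [True] * 110 + [False] * (PROMPTS - 110)  # ~41% refusals
    results_2025 = [True] * 67 + [False] * (PROMPTS - 67)    # 25% refusals

    print(f"2024: {refusal_rate(results_2024):.0%}")  # -> 2024: 41%
    print(f"2025: {refusal_rate(results_2025):.0%}")  # -> 2025: 25%
    ```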

    Despite these positive developments, our survey’s findings indicate that people are comfortable with companies and governments erecting strict guardrails on what their AI chatbots can generate, which may result in large-scale government-mandated corporate control of users’ access to information and ideas.

    Overwhelming opposition to political deepfakes

    Unsurprisingly, the category of AI content that received the lowest support across the board in our survey was deepfakes of politicians. No more than 38% of respondents in any country expressed approval of political deepfakes. This finding aligns with a surge of legislative activity in both the U.S. and abroad as policymakers rush to regulate the use of AI deepfakes in elections.

    At least 40 U.S. states introduced deepfake-related bills in the 2024 legislative session alone, with more than 50 bills already enacted. China, the EU, and others are all scrambling to pass laws requiring the detection, disclosure, and/or removal of deepfakes. Europe’s AI Act requires platforms to mitigate nebulous and ill-defined “systemic risks to society,” which could lead companies to preemptively remove lawful but controversial speech like deepfakes critical of politicians.

    Although deepfakes can have real-world consequences, First Amendment advocates who have challenged deepfake regulations in the U.S. rightly argue that laws targeting political deepfakes open the door for governments to censor lawful dissent, criticism, or satire of candidates, a vital function of the democratic process. This is not a merely speculative risk.

    The editor of a far-right German media outlet was sentenced to a seven-month suspended prison sentence for sharing a fake meme of the Interior Minister holding a sign that ironically read, “I hate freedom of speech.” For much of 2024, Google restricted Gemini’s ability to generate factual responses about Indian Prime Minister Narendra Modi, after the Indian government accused the company of breaking the law when its chatbot responded that Modi had been “accused of implementing policies some experts characterized as fascist.”

    And despite panic over AI-driven disinformation undermining global elections in 2024, studies from Princeton, the EU, and the Alan Turing Institute found no evidence that a wave of deepfakes affected election results in places like the U.S., Europe, or India.

    People want regulation but don’t trust regulators

    A recent Pew Research Center survey found that nearly six in 10 U.S. adults believed the government would not adequately regulate AI. Our survey confirms these findings on a global scale. In all countries surveyed except Taiwan, at least a plurality supported dual regulation by both governments and tech companies.

    Indeed, a 2023 Pew survey found that 55% of Americans supported government restrictions on false information online, even if it limited free expression. But a 2024 Axios poll found that more Americans fear misinformation from politicians than from AI, foreign governments, or social media. In other words, the public appears willing to empower those they distrust most with policing online and AI misinformation.

    A new FIRE poll, conducted in May 2025, underscores this tension. Although about 47% of respondents said they prioritize protecting free speech in politics, even if that means tolerating some deceptive content, 41% said it’s more important to protect people from misinformation than to protect free speech. Even so, 69% said they were “moderately” to “extremely” concerned that the government might use AI rules to silence criticism of elected officials.

    In a democracy, public opinion matters — and The Future of Free Speech survey suggests that people around the world, including in liberal democracies, favor regulating AI to suppress offensive or controversial content. But democracies are not mere megaphones for majorities. They must still safeguard the very freedoms — like the right to access information, question orthodoxy, and challenge those in power — that make self-government possible.

    We should avoid Preemptive Safetyism

    The dangers of Preemptive Safetyism are most vividly on display in China, where AI tools like DeepSeek must enforce “core socialist values,” avoiding topics like Taiwan, Xinjiang, or Tiananmen, even when released in the West. What looks like a safety net can easily become a dragnet for dissent.

    The fact that speech is generated by a machine does not negate the human right to receive it, especially as those algorithms become central to the very search engines, email clients, and word processors that we use as an interface for the exchange of ideas and information in the digital age.

    The greatest danger to speech often arises not from what is said, but from the fear of what might be said. An open society cannot thrive if its digital architecture is built to exclude dissent by design.

    Source link

  • Voters strongly support prioritizing freedom of speech in potential AI regulation of political messaging, poll finds

    Voters strongly support prioritizing freedom of speech in potential AI regulation of political messaging, poll finds

    • 47% say protecting free speech in politics is the most important priority, even if that lets some deceptive content slip through
    • 28% say government regulation of AI-generated or AI-altered content would make them less likely to share content on social media
    • 81% showed concern about government regulation of election-related AI content being abused to suppress criticism of elected officials

    PHILADELPHIA, June 5, 2025 — Americans strongly believe that lawmakers should prioritize protecting freedom of speech online rather than stopping deceptive content when it comes to potential regulation of artificial intelligence in political messaging, a new national poll of voters finds.

    The survey, conducted by Morning Consult for the Foundation for Individual Rights and Expression, reflects a complicated, or even conflicted, public view of AI: People are wary about artificial intelligence but are uncomfortable with the prospect of allowing government regulators to chill speech, censor criticism and prohibit controversial ideas.

    “This poll reveals that free speech advocates have their work cut out for them when it comes to making our case about the important principles underpinning our First Amendment, and how they apply to AI,” said FIRE Director of Research Ryne Weiss. “Technologies may change, but strong protections for free expression are as critical as ever.” 

    Sixty percent of those surveyed believe sharing AI-generated content is more harmful to the electoral process than government regulation of it. But when asked to choose, more voters (47%) prioritize protecting free speech in politics over stopping deceptive content (37%), regardless of political ideology. Sixty-three percent agree that the right to freedom of speech should be the government’s main priority when making laws that govern the use of AI.

    And 81% are concerned about official rules around election-related AI content being abused to suppress criticism of elected officials. A little more than half are concerned that strict laws making it a crime to publish an AI-generated/AI-altered political video, image, or audio recording would chill or limit criticism of political candidates.

    Voters are evenly split over whether AI is fundamentally different from other forms of speech and thus should be regulated differently. Photoshop and video editing, for example, have been used by political campaigns for many years, and 43% believe the use of AI by political campaigns should be treated the same as the use of older video, audio, and image editing technologies.

    “Handing more authority to government officials will be ripe for abuse and immediately step on critical First Amendment protections,” FIRE Legislative Counsel John Coleman said. “If anything, free expression is the proper antidote to concerns like misinformation, because truth dependably rises above.”

    The poll also found:

    • Two-thirds of those surveyed said it would be unacceptable for someone to use AI to create a realistic political ad that shows a candidate at an event they never actually attended by digitally adding the candidate’s likeness to another person.
    • 39% say it would be unacceptable for a political campaign to use any digital software, including AI, to reduce the visibility of wrinkles or blemishes on a candidate’s face in a political ad in order to improve the candidate’s appearance, compared to 29% who say it would be acceptable.
    • 42% agree that AI is a tool that facilitates an individual’s ability to practice their right to freedom of speech.

    The poll was conducted May 13-15, 2025, among a sample of registered voters in the US. A total of 2,005 interviews were conducted online across the US for a margin of error of plus or minus 2 percentage points. Frequency counts may not sum to 2,005 due to weighting and rounding.
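
    As a rough check on that figure: for a simple random sample of this size, the standard 95% margin-of-error formula gives roughly ±2.2 points under the most conservative assumption (p = 0.5). This is only a back-of-the-envelope sketch; Morning Consult’s weighting and design effects, which are not described here, account for the difference from the published ±2 points.

    ```python
    import math

    # Back-of-the-envelope margin of error at 95% confidence for a simple
    # random sample; real polls also apply weighting and design effects.
    n = 2005   # completed interviews, as reported
    z = 1.96   # 95% confidence multiplier
    p = 0.5    # most conservative proportion

    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"±{moe * 100:.1f} percentage points")  # ±2.2 percentage points
    ```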

    The Foundation for Individual Rights and Expression (FIRE) is a nonpartisan, nonprofit organization dedicated to defending and sustaining the individual rights of all Americans to free speech and free thought — the most essential qualities of liberty. FIRE educates Americans about the importance of these inalienable rights, promotes a culture of respect for these rights, and provides the means to preserve them.

    CONTACT
    Karl de Vries, Director of Media Relations, FIRE: 215-717-3473; [email protected] 

    Source link

  • Risk-based quality regulation – drivers and dynamics in Australian higher education

    Risk-based quality regulation – drivers and dynamics in Australian higher education

    by Joseph David Blacklock, Jeanette Baird and Bjørn Stensaker

    ‘Risk-based’ models for higher education quality regulation have become increasingly popular globally. At the same time, there is limited knowledge of how risk-based regulation can be implemented effectively.

    Australia’s Tertiary Education Quality and Standards Agency (TEQSA) started to implement risk-based regulation in 2011, aiming at an approach that balances regulatory necessity, risk and proportionality. Our recently published study analyses TEQSA’s evolution between 2011 and 2024 to contribute to an emerging body of research on the practice of risk-based regulation in higher education.

    The challenges of risk-based regulation

    Risk-based approaches are seen as a way to create more effective and efficient regulation, targeting resources to the areas or institutions of greatest risk. However, it is widely acknowledged that sector specificities, political economy and social context exert a significant influence on the practice of risk-based regulation (Black and Baldwin, 2010). Choices made by the regulator also affect its stakeholders and its perceived effectiveness – consider, for example, whose ideas about risk are privileged. Balancing the expectations of these stakeholders, along with its federal mandate, has required much compromise on TEQSA’s part.

    The evolution of TEQSA’s approaches

    Our study uses a conceptual framework suggested by Hood et al (2001) for comparative analyses of risk regulation regimes, which charts aspects of context and content respectively. With this as a starting point, we arrive at two theoretical constructs, ‘hyper-regulation’ and ‘dynamic regulation’, as a way to analyse the development of TEQSA over time. These opposing concepts of regulatory approach represent both theoretical and empirical executions of the risk-based model within higher education.

    From extensive document analysis, independent third-party analysis, and Delphi interviews, we identify three phases to TEQSA’s approach:

    • 2011-2013, marked by practices similar to ‘hyper-regulation’, including suspicion of institutions, burdensome requests for information and a perception that there was little ‘risk-based’ discrimination in use
    • 2014-2018, marked by the use of more indicators of ‘dynamic regulation’, including reduced evidence requirements for low-risk providers, sensitivity to the motivational postures of providers (Braithwaite et al. 1994), and more provider self-assurance
    • 2019-2024, marked by a broader approach to the identification of risks, greater attention to systemic risks, and more visible engagement with Federal Government policy, as well as the disruption of the pandemic.

    Across these three periods, we map a series of contextual and content factors to chart those that have remained more constant and those that have varied more widely over time.

    Of course, we do not suggest that TEQSA’s actions fit precisely into these timeframes, nor do we suggest that its actions have been guided by a wholly consistent regulatory philosophy in each phase. After the early and very visible adjustment of TEQSA’s approach, there has been an ongoing series of smaller changes, influenced also by the available resources, the views of successive TEQSA commissioners and the wider higher education landscape as a whole.

    Lessons learned

    Our analysis, building on ideas and perspectives from Hood, Rothstein and Baldwin, offers a comparatively simple yet informative taxonomy for future empirical research.

    TEQSA’s start-up phase, in which a hyper-regulatory approach was used, can be linked to a contextual need of the Federal Government at the time to support Australia’s international education industry, leading to the rather dominant judicial framing of its role. However, TEQSA’s initial regulatory stance failed to take account of the largely compliant regulatory posture of the universities that enrol around 90% of higher education students in Australia, and of the strength of this interest group. The new agency was understandably nervous about Government perceptions of its performance; however, a broader initial charting of stakeholder risk perspectives could have provided better guardrails. Similarly, a wider questioning of the sources of risk in TEQSA’s first and second phases could have highlighted more systemic risks.

    A further lesson for new risk-based regulators is to ensure that the regulator itself has a strong understanding of risks in the sector, to guide its analyses, and can readily obtain the data to generate robust risk assessments.

    Our study illustrates that risk-based regulation in practice is as negotiable as any other regulatory instrument. The ebb and flow of TEQSA’s engagement with the Federal Government and other stakeholders provides the context. As predicted by various authors, constant vigilance and regular recalibration are needed by the regulator as the external risk landscape changes and the wider interests of government and stakeholders dictate. The extent to which there is political tolerance for any ‘failure’ of a risk-based regulator is often unstated and always variable.

    Joseph David Blacklock is a graduate of the University of Oslo’s Master’s programme in Higher Education, with a special interest in risk-based regulation and government instruments for managing quality within higher education.

    Jeanette Baird consults on tertiary education quality assurance and strategy in Australia and internationally. She is Adjunct Professor of Higher Education at Divine Word University in Papua New Guinea and an Honorary Senior Fellow of the Centre for the Study of Higher Education at the University of Melbourne.

    Bjørn Stensaker is a professor of higher education at University of Oslo, specializing in studies of policy, reform and change in higher education. He has published widely on these issues in a range of academic journals and other outlets.

    This blog is based on our article in Policy Reviews in Higher Education (online 29 April 2025):

    Blacklock, JD, Baird, J & Stensaker, B (2025) ‘Evolutionary stages in risk-based quality regulation in Australian higher education 2011–2024’ Policy Reviews in Higher Education, 1–23.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • Can we use LEO in regulation?

    Can we use LEO in regulation?

    The Institute for Fiscal Studies answers the last government’s question on earnings data in regulation. David Kernohan reads along

    Source link

  • Don’t let Texas criminalize free political speech in the name of AI regulation

    Don’t let Texas criminalize free political speech in the name of AI regulation

    This essay was originally published by the Austin American-Statesman on May 2, 2025.


    Texans aren’t exactly shy about speaking their minds — whether it’s at city hall, in the town square, or all over social media. But a slate of bills now moving through the Texas Legislature threatens to make that proud tradition a criminal offense.

    In the name of regulating artificial intelligence, lawmakers are proposing bills that could turn political memes, commentary and satire into crimes.

    Senate Bills 893 and 228, and House Bills 366 and 556, might be attempting to protect election integrity, but these bills actually impose sweeping restrictions that could silence ordinary Texans just trying to express their opinions.

    Take SB 893 and its companion HB 2795. These would make it a crime to create and share AI-generated images, audio recordings, or videos if done with the intent to “deceive” and “influence the result of an election.” The bill offers a limited safeguard: If you want to share any images covered by the bill, you must edit them to add a government-mandated warning label.

    But the bills never define what counts as “deceptive,” handing prosecutors a blank check to decide what speech crosses the line. That’s a recipe for selective enforcement and criminalizing unpopular opinions. And SB 893 has already passed the Senate.

    HB 366, which just passed the House, goes even further. It would require a disclaimer on any political ad that contains “altered media,” even when the content isn’t misleading. With the provisions applying to anyone spending at least $100 on political advertising, which is easily the amount a person could spend to boost a social media post or to print some flyers, a private citizen could be subject to the law.

    Once this threshold is met, an AI-generated meme, a five-second clip on social media, or a goofy Photoshop that gives the opponent a giant cartoon head would all suddenly need a legal warning label. No exceptions for satire, parody or commentary are included. If it didn’t happen in real life, you’re legally obligated to slap a disclaimer on it.

    HB 556 and SB 228 take a similarly broad approach, treating all generative AI as suspect and criminalizing creative political expression.

    These proposals aren’t just overkill; they’re unconstitutional. Courts have long held that parody, satire and even sharp political attacks are protected speech. Requiring Texans to add disclaimers to their opinions simply because they used modern tools to express them is not transparency. It’s compelled speech.

    Besides, Texas already has laws on the books to address defamation, fraud and election interference. What these bills do is expand government control over how Texans express themselves while turning political expression into a legal minefield.

    Fighting deception at the ballot box shouldn’t mean criminalizing creativity or chilling free speech online. Texans shouldn’t need a lawyer to know whether they can post a meme they made on social media or make a joke about a candidate.

    Political life in Texas has been known to be colorful, rowdy and fiercely independent — and that’s how it should stay. Vague laws and open-ended definitions shouldn’t dictate what Texans can say, how they can say it, or which tools they’re allowed to use.

    The Texas Legislature should scrap these overbroad AI bills and defend the Lone Star state’s real legacy: fearless, unapologetic free speech.

    Source link

  • So now will the government take the chainsaw to HE regulation?

    So now will the government take the chainsaw to HE regulation?

    The Prime Minister recently declared that Britain has ‘too much regulation and too many regulators’ before the shock announcement to abolish the world’s biggest quango, NHS England. Since December, the Government has been fighting a war against red tape, which it believes is hindering economic growth. University Alliance, and I suspect most of the higher education sector, has some sympathy with the PM on this – at least when it comes to higher education regulation. I cannot remember a meeting in the past several years when the burden of regulation was not brought up as a key source of the sector’s woes.

    We need to be clear here that regulating higher education is important. The recent Sunday Times coverage alleging serious fraud in the higher education franchised provision system is testament to that, and it is right that the government and the regulator continue to act robustly. The question, then, is less whether higher education needs regulating at all, but rather whether the right regulators are regulating the right activity in the right way. It should be perfectly possible to have a tough regulator that prevents fraud and acts in the student interest while also reducing duplication in the system and focusing in on the areas of highest risk.

    The sheer volume of external regulatory demand placed upon our sector goes well beyond the well-documented teething problems with our fledgling regulator, the Office for Students (OfS). To outside observer Alex Usher of Canada’s Higher Education Strategy Associates, it appears extreme:

    ‘Canada has no REF, no TEF, no KEF. We have nothing resembling the Office for Students. External quality assurance, where it exists, is so light touch as to be basically invisible. This does not stop us from having four or five universities in the Global top 100, eight in the top 200, and twenty or so in the top 500.’

    The volume of regulatory requirements is even higher for vocationally oriented and professionally accredited provision, which is the lifeblood of Alliance universities. In addition to the OfS, courses which provide access to the so-called ‘regulated professions’  are also overseen by a wide range of Professional, Statutory and Regulatory Bodies (PSRBs), each with their own requirements. PSRBs have wide authority over course content, assessment, and quality assurance, with formal reaccreditation required every three to six years on average.

    In some cases, particularly in the sphere of healthcare education, multiple PSRBs can have some degree of authority over a single course. For example, an undergraduate degree course in Occupational Therapy must meet the requirements of the OfS, the Health and Care Professions Council (HCPC) and the Royal College of Occupational Therapists (RCOT). Often, these different processes and requirements overlap and duplicate one another.

    If this seems excessive, it is nothing compared to the requirements imposed upon degree apprenticeships. Not only are they regulated by the OfS and likely PSRBs given their vocational nature, but they are also subject to the fiendishly complex funding assurance review procedure of the Education and Skills Funding Agency (ESFA)  as well as in-person Ofsted inspections at least every 5 to 6 years that can take up to a week. A recent UA report on healthcare apprenticeships found that this means they are more expensive to deliver than traditional degrees.

    The problem of regulatory burden in higher education has been continually flagged by sector bodies and by the House of Lords Industry and Regulators Committee, which called for a Higher Education Data Reduction Taskforce. Despite this, the issue has been mostly ignored by policymakers, bar a few small initiatives. It does not feature in any of the Government’s higher education reform priorities, although the Education Secretary is asking universities to become more efficient and the OfS expects them to take ‘rapid and decisive action’ to avoid going bust.

    With 72% of higher education providers facing potential deficit by 2025/26, it is a mystery why the higher education sector – an acknowledged engine of economic growth – appears to have been left out in the cold while this unexpected reprise of the bonfire of the quangos is being lit. To our knowledge, neither the PM nor the Chancellor has demanded that higher education sector regulators cut the cost and burden of regulation, as they have done for other regulators.

    Universities are rightfully subject to robust regulation, but the current regime is disproportionate, diverting dwindling resources away from teaching, student services and research. In the absence of more funding, cutting the cost and burden of regulation would go a long way. The establishment of Skills England, with its convening power and wide-angle, long-focus lens, should be used meaningfully to cut bureaucracy for degree apprenticeships while maintaining quality. Responsibility for monitoring the quality of degree apprenticeships should be given back to the OfS rather than Ofsted, and the ESFA audit process should be simplified. The OfS should also make a public commitment to cut the cost and burden of its regulation and work more closely with other sector regulators and PSRBs to avoid overlap and duplication.

    At a time when the Chancellor has urged ‘every regulator, no matter what sector’ to enact a ‘cultural shift’ and tear down the regulatory barriers that are holding back growth, cutting the cost of regulation in higher education should be a top priority.

    Source link

  • Effective regulation requires a degree of trust

    Effective regulation requires a degree of trust

    At one point in my career, I was the CEO of a students’ union who’d been charged with attempting to tackle a culture of initiation ceremonies in sports clubs.

    One day a legal letter appeared on my desk – the gist of which was “you can’t punish these people if they didn’t know the rules”.

    We trawled back through the training and policy statements – and found moments where we’d made clear that not only did we not permit initiation ceremonies, we’d defined them as follows:

    An initiation ceremony is any event at which members of a group are expected to perform an activity as a means of gaining credibility, status or entry into that group. This peer pressure is normally (though not explicitly) exerted on first-year students or new members and may involve the consumption of alcohol, eating various foodstuffs, nudity and other behaviour that may be deemed humiliating or degrading.

    The arguments being advanced were fourfold. The first was that where we had drawn the line between freedom to have fun and harmful behaviour, both in theory and in practice, was wrong.

    The second was that we’d not really enforced anything like this before, and appeared to want to make an example out of a group of students about whom a complaint had been raised.

    They said that we’d failed both to engender understanding of where the line we were setting for those running sports clubs lay, and to make clear our expectations about enforcing that line.

    And given there had been no intent to cause harm, it was put to us that the focus on investigations and punishments, rather than support for clubs to organise safe(r) social activity, was both disproportionate and counter-productive.

    And so to the South coast

    I’ve been thinking quite a bit about that affair in the context of the Office for Students (OfS) decision to fine the University of Sussex some £585k over both policy and governance failings identified during its three-year investigation into free speech at Sussex.

    One of the things that you can debate endlessly – and there’s been plenty of it on the site – is where you draw the line between freedom to speak and freedom from harm.

    That’s partly because even if you have an objective of securing an environment characterised by academic freedom and freedom of speech, if you don’t take steps to cause students to feel safe, there can be a silencing effect – which at least in theory there’s quite a bit of evidence on (including inside the Office for Students).

    You can also argue that the “make an example of them” thing is unfair – but ever since a copper stopped me on the M4 doing 85mph one afternoon, I’ve been reminded of the old “you can’t prove your innocence by proving others’ guilt” line.

    Four days after OfS says it “identified reports” about an “incident” at the University of Sussex, then Director of Compliance and Student Protection Susan Lapworth took to the stage at Independent HE’s conference to signal a pivot from registration to enforcement.

    She noted that the statutory framework gave OfS powers to investigate cases where it was concerned about compliance, and to enforce compliance with conditions where it found a breach.

    She signalled that that could include requiring a provider to do something, or not do something, to fix a breach; the imposition of a monetary penalty; the suspension of registration; and the deregistration of a provider if that proved necessary.

    “That all sounds quite fierce”, she said. “But we need to understand which of these enforcement tools work best in which circumstances.” And, perhaps more importantly “what we want to achieve in using them – what’s the purpose of being fierce?”

    The answer was that OfS wanted to create incentives for all providers to comply with their conditions of registration:

    For example, regulators assume that imposing a monetary penalty on one provider will result in all the others taking steps to comply without the regulator needing to get involved.

    That was an “efficient way” to secure compliance across a whole sector, particularly for a regulator like OfS that “deliberately doesn’t re-check compliance for every provider periodically”.

    Even if you agree with the principle, you can argue that it’s pretty much failed at that over the intervening years – which is arguably why the £585k fine has come as so much of a shock.

    But it’s the other two aspects of that initiation thing – the understanding one and the character of interventions one – that I’ve also been thinking about this week in the context of the Sussex fine.

    Multiple roles

    On The Wonkhe Show, Public First’s Jonathon Simons worries about OfS’ multiple roles:

    If the Office for Students is acting in essentially a quasi-judicial capacity, they can’t, under that role, help one of the parties in a case try to resolve things. You can’t employ a judge to try and help you. But if they are also trying to regulate in the student interest, then they absolutely can and should be working with universities to try and help them navigate this – rather than saying, no, we think we know what the answer is, but you just have to keep on revising your policy, and at some point we may or may not tell you you got it right.

    It’s a fair point. Too much intervention, and OfS appears compromised when enforcing penalties. Too little, and universities struggle to meet shifting expectations – ultimately to the detriment of students.

    As such, you might argue that OfS ought to draw firmer lines between its advisory and enforcement functions – ensuring institutions receive the necessary support to comply while safeguarding the integrity of its regulatory oversight. At the very least, maybe it should choose who fronts which bits – rather than its topic-based style of “here’s our Director for X who will both advise and crack down.”

    But it’s not as if OfS doesn’t routinely combine advising and cracking down – its access and participation function does just that. There’s a whole research spin-off dedicated to what works, extensive advice on risks to access and participation and what ought to be in its APPs, and most seem to agree that the character of that team is appropriately balanced in its plan approval and monitoring processes – even if I sometimes worry that poor performance against those plans is routinely going unpunished.

    And that’s not exactly rare. The Regulator’s Code seeks to promote “proportionate, consistent and targeted regulatory activity” through the development of “transparent and effective dialogue and understanding” between regulators and those they regulate. Sussex says that throughout the long investigation, OfS refused to meet in person – confirmed by Arif Ahmed in the press briefing.

    The Code also says that regulators should carry out their activities in a way that “supports those they regulate to comply” – and there’s good reasons for that. The original Code actually came from something called the Hampton Report – in 2004’s Budget, Gordon Brown tasked businessman Philip Hampton with reviewing regulatory inspection and enforcement, and it makes the point about example-setting:

    The penalty regime should aim to have an effective deterrent effect on those contemplating illegal activity. Lower penalties result in weak deterrents, and can even leave businesses with a commercial benefit from illegal activity. Lower penalties also require regulators to carry out more inspection, because there are greater incentives for companies to break the law if they think they can escape the regulator’s attention. Higher penalties can, to some extent, improve compliance and reduce the number of inspections required.

    But the review also noted that regulators were often slow, could be ineffective in targeting persistent offenders, and that the structure of some regulators, particularly local authorities, made effective action difficult. And some of that was about a failure to use risk-based regulation:

    The 1992 book Responsive Regulation, by Ian Ayres and John Braithwaite, was influential in defining an ‘enforcement pyramid’, up which regulators would progress depending on the seriousness of the regulatory risk, and the non-compliance of the regulated business. Ayres and Braithwaite believed that regulatory compliance was best secured by persuasion in the first instance, with inspection, enforcement notices and penalties being used for more risky businesses further up the pyramid.

    The pyramid game

    Responsive Regulation is a cracking book if you’re into that sort of thing. Its pyramid illustrates how regulators can escalate their responses from persuasion to punitive measures based on the behaviour of the regulated entities:

    In one version of the compliance pyramid, four broad categories of client (called archetypes) are defined by their underlying motivational postures:

    1. The disengaged clients who have decided not to comply,
    2. The resistant clients who don’t want to comply,
    3. The captured clients who try to comply, but don’t always succeed, and
    4. The accommodating clients who are willing to do the right thing.

    Sussex has been saying all week that it’s been either 3 or 4, but does seem to have been treated like it’s 1 or 2.

    As such, Responsive Regulation argues that regulators should aim to balance the encouragement of voluntary compliance with the necessity of enforcement – and of course that balance is one of the central themes emerging in the Sussex case, with VC Sacha Roseneil taking to PoliticsHome to argue that:

    …Our experience reflects closely the [Lords’ Industry and Regulators] committee’s observations that it “gives the impression that it is seeking to punish rather than support providers towards compliance, while taking little note of their views.” The OfS has indeed shown itself to be “arbitrary, overly controlling and unnecessarily combative”, to be failing to deliver value for money and is not focusing on the urgent problem of the financial sustainability of the sector.

    At roughly the same time as the Hampton Report, Richard Macrory – one of the leading environmental lawyers of his generation – was tasked by the Cabinet Office to lead a review on regulatory sanctions covering 60 national regulators, as well as local authorities.

    His key principle was that sanctions should aim to change offender behaviour by ensuring future compliance and potentially altering organisational culture. He also argued they should be responsive and appropriate to the offender and issue, ensure proportionality to the offence and harm caused, and act as a deterrent to discourage future non-compliance.

    To get there, he called for regulators to have a published policy for transparency and consistency, to justify their actions annually, and that the calculation of administrative penalties should be clear.

    These are also emerging as key issues in the Sussex case – Roseneil argues that the fine is “wholly disproportionate” and that OfS abandoned, without any explanation, most of its provisional findings originally communicated in 2014.

    The Macrory and Hampton reviews went on to influence the UK Regulatory Enforcement and Sanctions Act 2008, codifying the Ayres and Braithwaite Compliance Pyramid into law via the Regulator’s Code. The current version also includes a duty to ensure clear information, guidance and advice is available to help those they regulate meet their responsibilities to comply – and that’s been on my mind too.

    Knowing the rules and expectations

    The Code says that regulators should provide clear, accessible, and concise guidance using appropriate media and plain language for their audience. It says they should consult those they regulate to ensure guidance meets their needs, and create an environment where regulated entities can seek advice without fear of enforcement.

    It also says that advice should be reliable and aimed at supporting compliance, with mechanisms in place for collaboration between regulators. And where multiple regulators are involved, they should consider each other’s advice and resolve disagreements through discussion.

    That’s partly because Hampton had argued that advice should be a central part of a regulators’ function:

    Advice reduces the risk of non-compliance, and the easier the advice is to access, and the more specific the advice is to the business, the more the risk of non-compliance is reduced.

    Hampton argued that regulatory complexity creates an unmet need for advice:

    Advice is needed because the regulatory environment is so complex, but the very complexity of the regulatory environment can cause business owners to give up on regulations and ‘just do their best’.

    He said that regulators should prioritise advice over inspections:

    The review has some concerns that regulators prioritise inspection over advice. Many of the regulators that spoke to the review saw advice as important, but not as a priority area for funding.

    And he argued that advice builds trust and compliance without excessive enforcement:

    Staff tend to see their role as securing business compliance in the most effective way possible – an approach the review endorses – and in most cases, this means helping business rather than punishing non-compliance.

    If we cast our minds back to 2021, despite the obvious emerging complexities in freedom of speech, OfS had in fact done very little to offer anything resembling advice – either on the Public Interest Governance Principles at stake in the Sussex case, or on the interrelationship between them and issues of EDI and harassment.

    Back in 2018, a board paper had promised, in partnership with the government and other regulators, an interactive event to encourage better understanding of the regulatory landscape – that would bring leaders in the sector together to “showcase projects and initiatives that are tackling these challenges”, experience “knowledge sharing sessions”, and the opportunity for attendees to “raise and discuss pressing issues with peers from across the sector”.

    The event was eventually held – in not very interactive form – in December 2022.

    Reflecting on a previous Joint Committee on Human Rights report, the board paper said that it was “clear that the complexity created by various forms of guidance and regulation is not serving the student interest”, and that OfS could “facilitate better sharing of best practice whilst keeping itself apprised of emerging issues.”

    I’m not aware of any activity to that end by October 2021 – and even though OfS consulted on draft guidance surrounding the “protect” duty last year, it’s been blocking our FOI attempts to see the guidance it was set to issue when implementation was paused ever since, despite us arguing that it would have been helpful for providers to see how it was interpreting the balancing acts we know are often required when looking at all the legislation and case law.

    The board paper also included a response to the JCHR that said it would be helpful to report on free speech prompted by a change in the risk profile in how free speech is upheld. Nothing to that end appeared by 2021 and still hasn’t unless we count a couple of Arif Ahmed speeches.

    Finally, the paper said that it was “not planning to name and shame providers” where free speech had been suppressed, but would publish regulatory action and the reasons for it where there had been a breach of registration condition E2.

    Either there have been plenty of less serious interventions without any of the promised signals to the sector, or, for all of the sound and fury about the issue in the media, there really haven’t been any cases to write home about other than Sussex since.

    Willing, but ready and able?

    The point about all of that – at least in this piece – is that it’s actually perfectly OK for a regulator to both advise and judge.

    It isn’t so much to evaluate whether the fine or the process has been fair, and it’s not to suggest that the regulator shouldn’t be deploying the “send an example to promote compliance” tactic.

    But it is to say that it’s obvious that those should be used in a properly risk-based context – and where there’s recognised complexity, the very least it should do is offer clear advice. It’s very hard to see how that function has been fulfilled thus far.

    In the OECD paper Reducing the Risk of Policy Failure: Challenges for Regulatory Compliance, regulation is supposed to be about ensuring that those regulated are ready, willing and able to comply:

    • Ready means clients who know what compliance is – and if there’s a knowledge constraint, there’s a duty to educate and exemplify. It’s not been done.
    • Able means clients who are able to comply – and if there’s a capability constraint, there’s a duty to enable and empower. That’s not been done either.
    • Willing means clients who want to comply – and if there’s an attitudinal constraint, there’s a duty to “engage, encourage [and then] enforce”.

    It’s hard to see how “engage” or “encourage” have been done – either by October 2021 or to date.

    And so it does look like an assumption on the part of the regulator – that providers and SUs arguing complexity have been disingenuous, and so aren’t willing to secure free speech – is what has led to the record fine in the Sussex case.

    If that’s true, evidence-free assumptions of that sort are what will destroy the sort of trust that underpins effective regulation in the student interest.

    Source link

  • Podcast: Wales cuts, mental health, regulation

    Podcast: Wales cuts, mental health, regulation

    This week on the podcast the Welsh government has announced £18.5m in additional capital funding for universities – but questions remain over reserves, job cuts, competition law and student protection.

    Meanwhile, new research reveals student mental health difficulties have tripled in the past seven years, and Universities UK warns that OfS’ new strategy risks expanding regulatory burden rather than focusing on priorities.

    With Andy Westwood, Professor of Public Policy at the University of Manchester, Emma Maslin, Senior Policy and Research Officer at AMOSSHE, Livia Scott, Partnerships Coordinator at Wonkhe and presented by Jim Dickinson, Associate Editor at Wonkhe.

    Read more

    The government’s in a pickle over fees and funding

    As the cuts rain down in Wales, whatever happened to learner protection?

    Partnership and promises are not incompatible

    Student mental health difficulties are on the rise, and so are inequalities

    Source link

  • FIRE opposes Virginia’s proposed regulation of candidate deepfakes

    FIRE opposes Virginia’s proposed regulation of candidate deepfakes

    Last year, California passed restrictions on sharing AI-generated deepfakes of candidates, which a court then promptly blocked for violating the First Amendment. Virginia now looks to be going down a similar road with a new bill to penalize people for merely sharing certain AI-generated media of political candidates.

    This legislation, introduced as SB 775 and HB 2479, would make it illegal to share artificially generated, realistic-looking images, video, or audio of a candidate to “influence an election,” if the person knew or should have known that the content is “deceptive or misleading.” Violations carry a civil penalty or, if the sharing occurred within 90 days before an election, up to one year in jail. Only if a person adds a conspicuous disclaimer to the media can they avoid these penalties.

    The practical effects of this ban are alarming. Say a person in Virginia encounters a deepfaked viral video of a candidate on Facebook within 90 days of an election. They know it’s not a real image of the candidate, but they think it’s amusing and captures a message they want to share with other Virginians. It doesn’t have a disclaimer, but the person doesn’t know it’s supposed to, and doesn’t know how to edit the video anyway. They decide to repost it to their feed.

    That person could now face jail time.

    The ban would also impact the media. Say a journalist shares a deepfake that is directly relevant to an important news story. The candidate depicted decides that the journalist didn’t adequately acknowledge “in a manner that can easily be heard and understood by the average listener or viewer, that there are questions about the authenticity of the media,” as the bill requires. That candidate could sue to block further sharing of the news story.

    These illustrate the startling breadth of SB 775/HB 2479’s regulation of core political speech, which makes it unlikely to survive judicial scrutiny. Laws targeting core political speech have serious difficulty passing constitutional muster, even when they involve false or misleading speech. That’s because there’s no general First Amendment exception for misinformation, disinformation, or other false speech. That’s for good reason: A general exception would be easily abused to suppress dissent and criticism.

    There are narrow, well-defined categories of speech not protected by the First Amendment — such as fraud and defamation — that Virginia can and does already restrict. But SB 775/HB 2479 is not limited to fraudulent or defamatory speech.

    Laws that burden protected speech related to elections must clear a very high bar to pass constitutional muster. This bill doesn’t meet that bar. It restricts far more speech than necessary to prevent voters from being deceived in ways that would have any effect on an election, and there are other ways to address deepfakes that would burden much less speech. For one, other speakers or candidates can (and do) simply point them out, eroding their potential to deceive.

    The First Amendment safeguards expressive tools like AI, allowing them to enhance our ability to communicate with one another without facing undue government restrictions.

    We urge the Virginia General Assembly to oppose this legislation. If it gets to his desk, Virginia Gov. Glenn Youngkin should veto it.

    Source link

  • CUPA-HR Files Comment Extension Request to USDA Regarding New Blacklisting Regulation for Federal Contractors – CUPA-HR

    CUPA-HR Files Comment Extension Request to USDA Regarding New Blacklisting Regulation for Federal Contractors – CUPA-HR

    by CUPA-HR | March 21, 2022

    On February 17, the U.S. Department of Agriculture (USDA) issued a Notice of Proposed Rulemaking (NPRM) outlining plans to impose new HR-related conditions on USDA contracts. If finalized, the rule would require federal contractors on projects procured by the USDA to certify their compliance with dozens of federal and state labor laws and executive orders. The proposal mirrors similar “blacklisting” regulations pursued by the USDA during the Obama administration.

    The USDA provided only 32 days for stakeholders to submit comments on the proposal. CUPA-HR, along with several other higher education associations, filed an extension request with the department asking for an additional 90 days to “evaluate the NPRM’s impact on [members’] research missions and collect the information needed in order to provide thoughtful and accurate input to the USDA.” CUPA-HR plans to file comments on the proposal as well.

    The new proposed rulemaking amends the Agriculture Acquisition Regulation (AGAR) to require federal contractors on USDA supply and service projects that exceed the simplified acquisition threshold to certify that they and their subcontractors and suppliers are “in compliance with” 15 federal labor laws, their state equivalents and executive orders. This includes, but is not limited to:

    • Fair Labor Standards Act;
    • Occupational Safety and Health Act;
    • National Labor Relations Act;
    • Service Contract Act;
    • Davis-Bacon Act;
    • Title VII of the Civil Rights Act;
    • Americans with Disabilities Act;
    • Age Discrimination in Employment Act; and
    • Family and Medical Leave Act.

    Additionally, federal contractors submitting offers for a project would be required to disclose to the USDA previous violations and certify they and their subcontractors “are in compliance with” any required corrective actions for those violations. They would also be required to alert USDA to any future adjudications of non-compliance.

    In 2011, the USDA tried to implement a similar policy via a Direct Final Rule and NPRM, but was forced to withdraw both due to stakeholder pushback. CUPA-HR filed comments with the Society for Human Resource Management calling the rules arbitrary and capricious. Our comments also criticized the rules for not adequately clarifying how contractors were expected to comply with the changes and for imposing severe penalties. Additionally, CUPA-HR joined comments filed by the American Council on Education and several other higher education associations that argued the USDA’s rules “impose[d] an unmanageable compliance burden and uncertain compliance risk for colleges and universities that conduct agricultural research under contracts with the [USDA].”

    Additionally, the Obama administration issued an executive order in July 2014 implementing a similar government-wide policy. The Federal Acquisition Regulation (FAR) Council and the Department of Labor issued regulations and guidance, respectively, implementing the order, but they were blocked by a federal judge in October 2016 for violating the First Amendment and due process rights. Congress also passed a Congressional Review Act challenge to the executive order in 2017, permanently withdrawing the executive order and barring the FAR Council from issuing any substantially similar regulations.

    Unlike past proposals, this time the USDA has stated that the certifications will be subject to the False Claims Act (FCA), which provides for substantially increased liability. The FCA provides for treble damages and penalties and allows for private citizens to file suits on behalf of the government (called “qui tam” suits). Qui tam litigants receive a portion of the government’s recovery. According to the Department of Justice (DOJ), the awards to qui tam litigants in FCA suits topped $238 million in 2021. The same DOJ statistics show qui tam suits were the majority of FCA claims, with the government filing 203 new suits under FCA in 2021 compared to 598 qui tam suits in the same year.

    CUPA-HR will continue to monitor this issue closely.



    Source link