Tag: regulate

  • The Office for Students steps on to shaky ground in an attempt to regulate academic standards

    The funny thing about today’s intervention by the Office for Students is that it is not really about grade inflation, or degree algorithms.

    I mean, it is on one level: we get three investigation reports on providers related to registration condition B4, and an accompanying “lessons learned” report that focuses on degree algorithms.

    But the central question is about academic standards – how they are upheld, and what role an arm of the government has in upholding them.

    And it is about whether OfS has the ability to state that three providers are at “increased risk” of breaching a condition of registration on the scant evidence of grade inflation presented.

    And it is certainly about whether OfS is actually able to dictate (or even strongly hint at its revealed preferences on) the way degrees are awarded at individual providers, or the way academic standards are upheld.

    If you are looking for the rule book

    Paragraph 335N(b) of the OfS Regulatory Framework is the sum total of the advice it has offered before today to the sector on degree algorithms.

    The design of the calculations that turn a collection of module marks (each assessed carefully against criteria set out in the module handbook, and cross-checked by an academic from another university against a shared understanding of what should be expected of students) into the award of a degree at a given classification is a potential area of concern:

    where a provider has changed its degree classification algorithm, or other aspects of its academic regulations, such that students are likely to receive a higher classification than previous students without an increase in their level of achievement.

    These circumstances could potentially be a breach of condition of registration B4, which relates to “Assessment and Awards” – specifically condition B4.2(c), which requires that:

    academic regulations are designed to ensure that relevant awards are credible;

    Or B4.2(e), which requires that:

    relevant awards granted to students are credible at the point of being granted and when compared to those granted previously

    The current version of condition B4 came into force in May 2022.
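    For anyone who has never had to look inside one, a degree classification algorithm is nothing more exotic than a deterministic rule that turns module marks into a classification. Below is a minimal sketch in Python of the general shape under discussion – the credit weightings, boundaries and borderline uplift rule are invented for illustration, and are not any provider’s actual regulations.

    ```python
    # Illustrative only: a hypothetical degree classification algorithm of the
    # general shape discussed here, not any provider's actual rules.

    def classify(level5_marks, level6_marks, weights=(1, 2), borderline_zone=2.0):
        """Credit-weighted mean of second-year (level 5) and final-year (level 6)
        marks, with a simple borderline uplift: an overall mark within
        `borderline_zone` of a class boundary is lifted if at least half of the
        final-year credits already sit in the higher class. Every parameter here
        is an assumption made for illustration."""
        def weighted_mean(marks):  # marks is a list of (mark, credits) tuples
            total_credits = sum(credits for _, credits in marks)
            return sum(mark * credits for mark, credits in marks) / total_credits

        w5, w6 = weights
        overall = (w5 * weighted_mean(level5_marks) + w6 * weighted_mean(level6_marks)) / (w5 + w6)

        boundaries = [(70, "First"), (60, "Upper second"), (50, "Lower second"), (40, "Third")]
        for boundary, label in boundaries:
            if overall >= boundary:
                return label
            if boundary - overall <= borderline_zone:
                higher = sum(credits for mark, credits in level6_marks if mark >= boundary)
                total = sum(credits for _, credits in level6_marks)
                if higher >= total / 2:
                    return label
        return "Fail"

    level5 = [(62, 20), (58, 20), (65, 20), (60, 60)]   # (mark, credits)
    level6 = [(72, 20), (74, 20), (68, 20), (73, 60)]
    print(classify(level5, level6))  # "First" via the uplift; the plain mean (~68.4) is an upper second
    ```

    Tinker with the weights or the uplift rule and the same set of marks produces a different classification – which is exactly why changes to these rules attract regulatory interest.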

    In the mighty list of things that OfS needs to have regard to that we know and love (section 2 of the Higher Education and Research Act 2017), we learn that OfS has to pay mind to “the need to protect the institutional autonomy of English higher education providers” – and that, in the way it regulates, it should be:

    Transparent, accountable, proportionate, and consistent and […] targeted only at cases where action is needed

    Mutant algorithms

    With all this in mind, we look at the way the regulator has acted in this latest intervention on grade inflation.

    Historically, the approach has been one of assessing “unexplained” (even once, horrifyingly, “unwarranted”) good honours (first or 2:1) degrees. There’s much more elsewhere on Wonkhe, but in essence OfS came up with its own algorithm – taking into account the degrees awarded in 2010-11 and the varying proportions of students in given subject areas, with given A levels and of a given age – that starts from the position that non-traditional students shouldn’t be getting as many good grades as their (three good A levels, straight from school) peers, and that if they did then this was potentially evidence of a problem.

    To quote from annex B (“statistical modelling”) of last year’s release:

    “We interact subject of study, entry qualifications and age with year of graduation to account for changes in awarding […] our model allows us to statistically predict the proportion of graduates awarded a first or an upper second class degree, or a first class degree, accounting for the effects of these explanatory variables.”
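    Stripped of the statistical prose, annex B is describing a logistic regression on individual graduate records, with subject, entry qualifications and age each interacted with graduation year. A rough sketch of that kind of model is below – the file, column names and simplified specification are assumptions for illustration, and this is not a reproduction of OfS’s actual model.

    ```python
    # Sketch of the kind of model annex B describes: a logistic regression
    # predicting whether a graduate received a first or upper second, with
    # subject, entry qualifications and age each interacted with graduation
    # year. Data and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    graduates = pd.read_csv("graduate_records.csv")  # hypothetical individual-level records
    # expects a binary good_honours column (1 = first or 2:1) plus categorical
    # subject, entry_quals, age_group, grad_year and provider columns

    model = smf.logit(
        "good_honours ~ C(subject) * C(grad_year)"
        " + C(entry_quals) * C(grad_year)"
        " + C(age_group) * C(grad_year)",
        data=graduates,
    ).fit()

    # Fixed effect coefficient estimates (log-odds) for each explanatory variable
    print(model.params)

    # "Unexplained" awarding is then read off as the gap between what providers
    # actually awarded and what the fitted model predicts for their intake
    predicted = model.predict(graduates)
    print(predicted.groupby(graduates["provider"]).mean())
    ```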

    When I wrote this up last year I did a plot of the impact each of these variables is expected to have – the fixed effect coefficient estimates show the increase (or decrease) in the likelihood of a person getting a first or upper second class degree.

    One is tempted to wonder whether the bit of OfS that deals with this issue ever speaks to the bit that is determined to drive out awarding gaps based on socio-economic background (which, as we know, very closely correlates with A level results). This is certainly one way of explaining why – if you look at the raw numbers – the providers that award more first class and 2:1 degrees are in the Russell Group and among small selective specialist providers.

    Based on this model (which for 2023-24 failed to accurately predict fully fifty per cent of the grades awarded), OfS selected – back in 2022(!) – three providers where it felt that the “unexplained” awards had risen surprisingly quickly over a single year.

    What OfS found (and didn’t find)

    Teesside University was not found to have ever been in breach of condition B4 – OfS was unable to identify statistically significant differences in the proportion of “good” honours awarded to a single cohort of students when it applied each of the three algorithms Teesside has used over the past decade or so. There is – we can unequivocally say – no evidence of artificial grade inflation at Teesside University.
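    The check described here can be sketched in outline: run one cohort’s module marks through each historical algorithm and test whether the resulting proportions of “good” honours differ. The sketch below uses a chi-squared test on the award counts – the algorithms and data are placeholders, and since the same students sit under every algorithm a paired test would arguably be more appropriate, so treat it as illustrative only.

    ```python
    # Illustrative check: do different classification algorithms produce
    # significantly different proportions of "good" honours for one cohort?
    from scipy.stats import chi2_contingency

    def good_honours_count(cohort, algorithm):
        """Count students awarded a first or upper second under a given algorithm."""
        awards = [algorithm(student) for student in cohort]
        return sum(award in ("First", "Upper second") for award in awards)

    def compare_algorithms(cohort, algorithms, alpha=0.05):
        """Chi-squared test of whether 'good' honours rates differ by algorithm.
        `cohort` is a list of student mark profiles; `algorithms` is a list of
        functions mapping a profile to a classification (placeholders here).
        Note: the samples are not independent, so this is only a rough check."""
        table = []
        for algorithm in algorithms:
            good = good_honours_count(cohort, algorithm)
            table.append([good, len(cohort) - good])  # good vs. not-good per algorithm
        chi2, p_value, _, _ = chi2_contingency(table)
        return p_value, p_value < alpha
    ```

    A Teesside-style result is simply a p-value that stays comfortably above the significance threshold.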

    St Mary’s University, Twickenham and the University of West London were found to have historically been in breach of condition B4. The St Mary’s issue related to an approach that was introduced in 2016-17 and replaced in 2021-22; at West London the offending practice was introduced in 2015-16 and replaced in 2021-22. In both cases, the replacement was made because of an identified risk of grade inflation. And at each provider a small number of students may have had their final award calculated using the old approach since 2021-22, based on a need to not arbitrarily change an approach that students had already been told about.

    To be clear – there is no evidence that either university has breached the current condition B4 (not least because it came into force after the offending algorithms had been replaced). In each instance the provider in question has made changes based on the evidence it has seen that an aspect of the algorithm is not having the desired effect, which is exactly the way in which assurance processes should (and generally do) work.

    Despite none of the providers in question currently being in breach of B4, all three are now judged to be at an increased risk of breaching condition B4.

    No evidence has been provided as to why these three particular institutions are at an “increased risk” of a breach while others who may use substantially identical approaches to calculating final degree awards (but have not been lucky enough to undergo an OfS inspection on grade inflation) are not. Each is required to conduct a “calibration exercise” – basically a review of their approach to awarding undergraduate degrees of the sort each has already completed (and made changes based on) in recent years.

    Vibes-based regulation

    Alongside these three combined investigation/regulatory decision publications comes a report on bachelors’ degree classification algorithms. This purports to set out the “lessons learned” from the three reports, but it actually sets up what amounts to a revision to condition B4.

    We recognise that we have not previously published our views relating to the use of algorithms in the awarding of degrees. We look forward to positive engagement with the sector about the contents of this report. Once the providers we have investigated have completed the actions they have agreed to undertake, we may update it to reflect the findings from those exercises.

    The important word here is “views”. OfS expresses some views on the design of degree algorithms, but it is not the first to do so and there are other equally valid views held by professional bodies, providers, and others – there is a live debate and a substantial academic literature on the topic. Academia is the natural home of this kind of exchange of views, and in the crucible of scholarly debate evidence and logical consistency are winning moves. Having looked at every algorithm he could find, Jim Dickinson covers the debates over algorithm characteristics elsewhere on the site.

    It does feel like these might be views expressed ahead of a change to condition B4 – something that OfS does have the power to do, but would most likely (in terms of good regulatory practice, and the sensitive nature of work related to academic standards managed elsewhere in the UK by providers themselves) be subject to a full consultation. OfS is suggesting that it is likely to find certain practices incompatible with the current B4 requirements – something which amounts to a de facto change in the rules even if it has been done under the guise of guidance.

    Providers are reminded that (as they are already expected to do) they must monitor the accuracy and reliability of current and future degree algorithms – and there is a new reportable event: providers need to tell OfS if they change their algorithm in a way that may result in an increase in the proportion of “good” honours degrees awarded.

    And – this is the kicker – when they do make these changes, the external calibration they do cannot relate to external examiner judgements. The belief here is that external examiners only ever work at a module level, and don’t have a view over an entire course.

    There is even a caveat – a provider might ask a current or former external examiner to take an external look at their algorithm in a calibration exercise, but the provider shouldn’t rely solely on their views, as a “fresh perspective” is needed. This harks back to that rather confusing section of the recent white paper about “assessing the merits of the sector continuing to use the external examiner system” while apparently ignoring the bits around “building the evidence base” and “seeking employers’ views”.

    Academic judgement

    Historically, all this has been a matter for the sector – academic standards in the UK’s world-leading higher education sector have been set and maintained by academics. As long ago as 2019 the UK Standing Committee for Quality Assessment (now known as the Quality Council for UK Higher Education) published a Statement of Intent on fairness in degree classification.

    It is short, clear and to the point, as was then the fashion in quality assurance circles. Right now we are concerned with paragraph b, which commits providers to protecting the value of their degrees by:

    reviewing and explaining how their process for calculating final classifications, fully reflect student attainment against learning criteria, protect the integrity of classification boundary conventions, and maintain comparability of qualifications in the sector and over time

    That’s pretty uncontroversial, as is the recommended implementation pathway in England: a published “degree outcomes statement” articulating the results of an internal institutional review.

    The idea was that these statements would show the kind of quantitative trends that OfS gets interested in, offer some assurance that institutional assessment processes meet the relevant reference points and reflect the expertise and experience of external examiners, and provide a clear and publicly accessible rationale for the degree algorithm. As Jim sets out elsewhere, in the main this has happened – though it hasn’t been an unqualified success.

    To be continued

    The release of this documentation prompts a number of questions, both on the specifics of what is being done and more widely on the way in which this approach does (or does not) constitute good regulatory practice.

    It is fair to ask, for instance, whether OfS has the power to decide that it has concerns about particular degree awarding practices, even where it is unable to point to evidence that these practices are currently having a significant impact on degrees awarded, and to promote a de facto change in interpretation of regulation that will discourage their use.

    Likewise, it seems problematic that OfS believes it has the power to declare that the three providers it investigated are at risk of breaching a condition of registration because they have an approach to awarding degrees that it has decided it doesn’t like.

    It is concerning that these three providers have been announced as being at higher risk of a breach when other providers with similar practices have not. It is worth asking whether this outcome meets the criteria for transparent, accountable, proportionate, and consistent regulatory practice – and whether it represents action being targeted only at cases where it is demonstrably needed.

    More widely, the power to determine or limit the role and purpose of external examiners in upholding academic standards has not historically been one held by a regulator acting on behalf of the government. The external examiner system is a “sector recognised standard” (in the traditional sense) and generally commands the confidence of registered higher education providers. And it is clearly a matter of institutional autonomy – remember in HERA OfS needs to “have regard to” institutional autonomy over assessment, and it is difficult to square this intervention with that duty.

    And there is the worry about the value and impact of sector consultation – an issue picked up in the Industry and Regulators Committee review of OfS. Should a regulator really be initiating a “dialogue with the sector” when its preferences on the external examiner system are already so clearly stated? And it isn’t just the sector – a consultation needs to ensure that the views of employers (and other stakeholders, including professional bodies) are reflected in whatever becomes the final decision.

    Much of this may become clear over time – there is surely more to follow in the wider overhaul of assurance, quality, and standards regulation that was heralded in the post-16 white paper. A full consultation will help centre the views of employers, course leaders, graduates, and professional bodies – and the parallel work on bringing the OfS quality functions back into alignment with international standards will clearly also have an impact.

  • Ohio enacted a law to regulate online program managers. Here’s what it does.

    In June, Ohio became the second state to regulate how colleges can use third-party vendors to help launch and operate their online degree programs. 

    Under the new law, both public and private colleges in Ohio must disclose on their online program websites when they are using vendors to help run those offerings. Staff who work for these vendors, known as online program managers, must also identify themselves when talking to students. The law also requires colleges to report OPM contracts annually to the state’s higher education chancellor.

    The law, part of a larger state budget bill, additionally prohibits OPMs from making decisions about or disbursing student financial aid. 

    “Ohio’s law is a step in the right direction,” said Amber Villalobos, a fellow at The Century Foundation, a left-leaning think tank. “It’s great to see transparency laws because students will know who’s running their program, who’s teaching their programs.”

    The new law is the latest sign that states may take on a greater role in regulating OPM contracts, heeding calls by consumer advocates for stronger government oversight. 

    However, Villalobos said Ohio lawmakers could have improved the legislation by barring colleges from entering agreements that give OPMs a cut of tuition revenue for each student they recruit into an online program. Minnesota, the first state to pass a law regulating OPMs in 2024, prohibited its public colleges from striking tuition-share deals with these companies if they provide marketing or recruiting services. 

    U.S. law bars colleges that receive federal funding from giving incentive-based compensation to companies that recruit students into their programs. However, in 2011, federal guidance created an exception for colleges that enter tuition-share agreements with OPMs for recruiting services — but only if they are part of a larger bundle of services, such as curricular design and help with clinical placements. 

    But these deals have led to OPMs using misleading recruitment and marketing practices to enroll students and fill seats, Villalobos said. 

    “When tuition-sharing is used for marketing or recruiting purposes we’ve seen issues like predatory recruitment,” she said. 

    OPMs under scrutiny

    OPMs help colleges quickly set up and market online programs, said Phil Hill, an ed tech consultant. That’s important since launching a successful online program catering to nontraditional working adults can be challenging for colleges that typically enroll 18- to 24-year-olds, Hill said. 

    “It gives them a way to operate in the online space based on what students expect, but do it right away,” Hill said.

    However, OPM contracts have been subject to lawsuits and federal scrutiny in recent years. 

    In Ohio, for instance, legislators passed the new state law following the 2024 closure of Eastern Gateway Community College, which had offered tuition-free online college programs with an OPM.

    After the college began working with the for-profit company Student Resource Center, its enrollment soared from just 3,182 students in fall 2014 to 45,173 enrollees by fall 2021, according to federal data. Former employees accused the relationship of turning the college into an education mill, Inside Higher Ed reported at the time.

    By early 2022, the rapid enrollment growth and the college’s relationship with the Student Resource Center had attracted the attention of the U.S. Department of Education. 

    The federal agency alleged that year that the college’s free college initiative illegally charged students with Pell Grants more than those without. In response, the Education Department placed the college on Heightened Cash Monitoring 2 status, which forced the institution to pay its students’ federal financial aid out of pocket before seeking reimbursement from the agency. 

    In 2023, Eastern Gateway reached a deal with the Education Department to end its free college program. Its board of trustees voted to shutter the institution the following year.

  • FIRE statement on legislative proposals to regulate artificial intelligence

    As the 2025 legislative calendar begins, FIRE is preparing for lawmakers at both the state and federal levels to introduce a deluge of bills targeting artificial intelligence. 

    The First Amendment applies to artificial intelligence just as it does to other expressive technologies. Like the printing press, the camera, and the internet, AI can be used as an expressive tool — a technological advance that helps us communicate with one another and generate knowledge. As FIRE Executive Vice President Nico Perrino argued in The Los Angeles Times last month: “The Constitution shouldn’t be rewritten for every new communications technology.” 

    We again remind legislators that existing laws — cabined by the narrow, well-defined exceptions to the First Amendment’s broad protection — already address the vast majority of harms legislatures may seek to counter in the coming year. Laws prohibiting fraud, forgery, discrimination, and defamation, for example, apply regardless of how the unlawful activity is ultimately carried out. Liability for unlawful acts properly falls on the perpetrator of those acts, not the informational or communicative tools they use. 

    Some legislative initiatives seeking to govern the use of AI raise familiar First Amendment problems. For example, regulatory proposals that would require “watermarks” on artwork created by AI or mandate disclaimers on content generated by AI violate the First Amendment by compelling speech. FIRE has argued against these kinds of efforts to regulate the use of AI, and we will continue to do so — just as we have fought against government attempts to compel speech in school, on campus, or online.

    Lawmakers have also sought to regulate or even criminalize the use of AI-generated content in election-related communications. But courts have been wary of legislative attempts to control AI’s output when political speech is implicated. Following a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, for example, a federal district court recently enjoined a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content. 

    Content-based restrictions like California’s law require strict judicial scrutiny, no matter how the expression is created. As the federal court noted, the constitutional protections “safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered.” So while lawmakers might harbor “a well-founded fear of a digitally manipulated media landscape,” the court explained, “this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.” 


    Other legislative proposals threaten the First Amendment by imposing burdens directly on the developers of AI models. In the coming months, for example, Texas lawmakers will consider the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, a sweeping bill that would impose liability on developers, distributors, and deployers of AI systems that may introduce a risk of “algorithmic discrimination,” including by private actors. The bill vests broad regulatory authority in a newly created state “Artificial Intelligence Council” and imposes steep compliance costs. TRAIGA compels developers to publish regular risk reports, a requirement that will raise First Amendment concerns when applied to an AI model’s expressive output or the use of AI as a tool to facilitate protected expression. Last year, a federal court held a similar reporting requirement imposed on social media platforms was likely unconstitutional.

    TRAIGA’s provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. Addressing unlawful discrimination is an important legislative aim, and lawmakers are obligated to ensure we all benefit from the equal protection of the law. At the same time, our decades of work defending student and faculty rights has left FIRE all too familiar with the chilling effect on speech that results from expansive or arbitrary interpretations of anti-discrimination law on campus. We will oppose poorly crafted legislative efforts that would functionally build the same chill into artificial intelligence systems.

    The sprawling reach of legislative proposals like TRAIGA runs headlong into the expressive rights of the people building and using AI models. Rather than compelling disclaimers or imposing content-based restrictions on AI-generated expression, legislators should remember the law already protects against defamation, fraud, and other illegal conduct. And rather than preemptively saddling developers with broad liability for an AI model’s possible output, lawmakers must instead examine the recourse existing laws already provide victims of discrimination against those who would use AI — or any other communicative tool — to unlawful ends.

    FIRE will have more to say on the First Amendment threats presented by legislative proposals regarding AI in the weeks and months to come.

  • California and other states are rushing to regulate AI. This is what they’re missing

    This article was originally published in December 2024 on the opinion page of The Los Angeles Times and is republished here with permission.


    The Constitution shouldn’t be rewritten for every new communications technology. The Supreme Court reaffirmed this long-standing principle during its most recent term in applying the 1st Amendment to social media. The late Justice Antonin Scalia articulated it persuasively in 2011, noting that “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press … do not vary.”

    These principles should be front of mind for congressional Republicans and David Sacks, Trump’s recently chosen artificial intelligence czar, as they make policy on that emerging technology. The 1st Amendment standards that apply to older communications technologies must also apply to artificial intelligence, particularly as it stands to play an increasingly significant role in human expression and learning.

    But revolutionary technological change breeds uncertainty and fear. And where there is uncertainty and fear, unconstitutional regulation inevitably follows. According to the National Conference of State Legislatures, lawmakers in at least 45 states have introduced bills to regulate AI this year, and 31 states adopted laws or resolutions on the technology. Congress is also considering AI legislation.

    Many of these proposals respond to concerns that AI will supercharge the spread of misinformation. While the worry is understandable, misinformation is not subject to any categorical exemption from 1st Amendment protections. And with good reason: As Supreme Court Justice Robert Jackson observed in 1945, the Constitution’s framers “did not trust any government to separate the true from the false for us,” and therefore “every person must be his own watchman for truth.”

    California nevertheless enacted a law in September targeting “deceptive,” digitally modified content about political candidates. The law was motivated partly by an AI-altered video parodying Vice President Kamala Harris’ candidacy that went viral earlier in the summer.

    Two weeks after the law went into effect, a judge blocked it, writing that the “principles safeguarding the people’s right to criticize government … apply even in the new technological age” and that penalties for such criticism “have no place in our system of governance.”

    Ultimately, we don’t need new laws regulating most uses of AI; existing laws will do just fine. Defamation, fraud, false light and forgery laws already address the potential of deceptive expression to cause real harm. And they apply regardless of whether the deception is enabled by a radio broadcast or artificial intelligence technology. The Constitution should protect novel communications technology not just so we can share AI-enhanced political memes. We should also be able to freely harness AI in pursuit of another core 1st Amendment concern: knowledge production.

    When we think of free expression guarantees, we often think of the right to speak. But the 1st Amendment goes beyond that. As the Supreme Court held in 1969, “The Constitution protects the right to receive information and ideas.”

    Information is the foundation of progress. The more we have, the more we can propose and test hypotheses and produce knowledge.

    The internet, like the printing press, was a knowledge-accelerating innovation. But Congress almost hobbled development of the internet in the 1990s because of concerns that it would enable minors to access “indecent” content. Fortunately, the Supreme Court stood in its way by striking down much of the Communications Decency Act.

    Indeed, the Supreme Court’s application of the 1st Amendment to that new technology was so complete that it left Electronic Frontier Foundation attorney Mike Godwin wondering “whether I ought to retire from civil liberties work, my job being mostly done.” Godwin would go on to serve as general counsel for the Wikimedia Foundation, the nonprofit behind Wikipedia — which, he wrote, “couldn’t exist without the work that cyberlibertarians had done in the 1990s to guarantee freedom of expression and broader access to the internet.”

    Today humanity is developing a technology with even more knowledge-generating potential than the internet. No longer is knowledge production limited by the number of humans available to propose and test hypotheses. We can now enlist machines to augment our efforts.

    We are already starting to see the results: A researcher at the Massachusetts Institute of Technology recently reported that AI enabled a lab studying new materials to discover 44% more compounds. Dario Amodei, the chief executive of the AI company Anthropic, predicts that “AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years.”

    This promise can be realized only if America continues to view the tools of knowledge production as legally inseparable from the knowledge itself. Yes, the printing press led to a surge of “misinformation.” But it also enabled the Enlightenment.

    The 1st Amendment is America’s great facilitator: Because of it, the government can no more regulate the printing press than it can the words printed on a page. We must extend that standard to artificial intelligence, the arena where the next great fight for free speech will be fought.
