  • FIRE statement on legislative proposals to regulate artificial intelligence

    As the 2025 legislative calendar begins, FIRE is preparing for lawmakers at both the state and federal levels to introduce a deluge of bills targeting artificial intelligence. 

    The First Amendment applies to artificial intelligence just as it does to other expressive technologies. Like the printing press, the camera, and the internet, AI can be used as an expressive tool — a technological advance that helps us communicate with one another and generate knowledge. As FIRE Executive Vice President Nico Perrino argued in The Los Angeles Times last month: “The Constitution shouldn’t be rewritten for every new communications technology.” 

    We again remind legislators that existing laws — cabined by the narrow, well-defined exceptions to the First Amendment’s broad protection — already address the vast majority of harms legislatures may seek to counter in the coming year. Laws prohibiting fraud, forgery, discrimination, and defamation, for example, apply regardless of how the unlawful activity is ultimately carried out. Liability for unlawful acts properly falls on the perpetrator of those acts, not the informational or communicative tools they use. 

    Some legislative initiatives seeking to govern the use of AI raise familiar First Amendment problems. For example, regulatory proposals that would require “watermarks” on artwork created by AI or mandate disclaimers on content generated by AI violate the First Amendment by compelling speech. FIRE has argued against these kinds of efforts to regulate the use of AI, and we will continue to do so — just as we have fought against government attempts to compel speech in school, on campus, or online.

    Lawmakers have also sought to regulate or even criminalize the use of AI-generated content in election-related communications. But courts have been wary of legislative attempts to control AI’s output when political speech is implicated. Following a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, for example, a federal district court recently enjoined a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content. 

    Content-based restrictions like California’s law require strict judicial scrutiny, no matter how the expression is created. As the federal court noted, the constitutional protections “safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered.” So while lawmakers might harbor “a well-founded fear of a digitally manipulated media landscape,” the court explained, “this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.” 

    Other legislative proposals threaten the First Amendment by imposing burdens directly on the developers of AI models. In the coming months, for example, Texas lawmakers will consider the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, a sweeping bill that would impose liability on developers, distributors, and deployers of AI systems that may introduce a risk of “algorithmic discrimination,” including by private actors. The bill vests broad regulatory authority in a newly created state “Artificial Intelligence Council” and imposes steep compliance costs. TRAIGA compels developers to publish regular risk reports, a requirement that will raise First Amendment concerns when applied to an AI model’s expressive output or the use of AI as a tool to facilitate protected expression. Last year, a federal court held that a similar reporting requirement imposed on social media platforms was likely unconstitutional.

    TRAIGA’s provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. Addressing unlawful discrimination is an important legislative aim, and lawmakers are obligated to ensure we all benefit from the equal protection of the law. At the same time, our decades of work defending student and faculty rights have left FIRE all too familiar with the chilling effect on speech that results from expansive or arbitrary interpretations of anti-discrimination law on campus. We will oppose poorly crafted legislative efforts that would functionally build the same chill into artificial intelligence systems.

    The sprawling reach of legislative proposals like TRAIGA runs headlong into the expressive rights of the people building and using AI models. Rather than compelling disclaimers or imposing content-based restrictions on AI-generated expression, legislators should remember the law already protects against defamation, fraud, and other illegal conduct. And rather than preemptively saddling developers with broad liability for an AI model’s possible output, lawmakers must instead examine the recourse existing laws already provide victims of discrimination against those who would use AI — or any other communicative tool — to unlawful ends.

    FIRE will have more to say on the First Amendment threats presented by legislative proposals regarding AI in the weeks and months to come.

  • California and other states are rushing to regulate AI. This is what they’re missing

    This article was originally published in December 2024 on the opinion page of The Los Angeles Times and is republished here with permission.


    The Constitution shouldn’t be rewritten for every new communications technology. The Supreme Court reaffirmed this long-standing principle during its most recent term in applying the 1st Amendment to social media. The late Justice Antonin Scalia articulated it persuasively in 2011, noting that “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press … do not vary.”

    These principles should be front of mind for congressional Republicans and David Sacks, Trump’s recently chosen artificial intelligence czar, as they make policy on that emerging technology. The 1st Amendment standards that apply to older communications technologies must also apply to artificial intelligence, particularly as it stands to play an increasingly significant role in human expression and learning.

    But revolutionary technological change breeds uncertainty and fear. And where there is uncertainty and fear, unconstitutional regulation inevitably follows. According to the National Conference of State Legislatures, lawmakers in at least 45 states have introduced bills to regulate AI this year, and 31 states adopted laws or resolutions on the technology. Congress is also considering AI legislation.

    Many of these proposals respond to concerns that AI will supercharge the spread of misinformation. While the worry is understandable, misinformation is not subject to any categorical exemption from 1st Amendment protections. And with good reason: As Supreme Court Justice Robert Jackson observed in 1945, the Constitution’s framers “did not trust any government to separate the true from the false for us,” and therefore “every person must be his own watchman for truth.”

    California nevertheless enacted a law in September targeting “deceptive,” digitally modified content about political candidates. The law was motivated partly by an AI-altered video parodying Vice President Kamala Harris’ candidacy that went viral earlier in the summer.

    Two weeks after the law went into effect, a judge blocked it, writing that the “principles safeguarding the people’s right to criticize government … apply even in the new technological age” and that penalties for such criticism “have no place in our system of governance.”

    Ultimately, we don’t need new laws regulating most uses of AI; existing laws will do just fine. Defamation, fraud, false light and forgery laws already address the potential of deceptive expression to cause real harm. And they apply regardless of whether the deception is enabled by a radio broadcast or artificial intelligence technology. The Constitution should protect novel communications technology not just so we can share AI-enhanced political memes. We should also be able to freely harness AI in pursuit of another core 1st Amendment concern: knowledge production.

    When we think of free expression guarantees, we often think of the right to speak. But the 1st Amendment goes beyond that. As the Supreme Court held in 1969, “The Constitution protects the right to receive information and ideas.”

    Information is the foundation of progress. The more we have, the more we can propose and test hypotheses and produce knowledge.

    The internet, like the printing press, was a knowledge-accelerating innovation. But Congress almost hobbled development of the internet in the 1990s because of concerns that it would enable minors to access “indecent” content. Fortunately, the Supreme Court stood in its way by striking down much of the Communications Decency Act.

    Indeed, the Supreme Court’s application of the 1st Amendment to that new technology was so complete that it left Electronic Frontier Foundation attorney Mike Godwin wondering “whether I ought to retire from civil liberties work, my job being mostly done.” Godwin would go on to serve as general counsel for the Wikimedia Foundation, the nonprofit behind Wikipedia — which, he wrote, “couldn’t exist without the work that cyberlibertarians had done in the 1990s to guarantee freedom of expression and broader access to the internet.”

    Today humanity is developing a technology with even more knowledge-generating potential than the internet. No longer is knowledge production limited by the number of humans available to propose and test hypotheses. We can now enlist machines to augment our efforts.

    We are already starting to see the results: A researcher at the Massachusetts Institute of Technology recently reported that AI enabled a lab studying new materials to discover 44% more compounds. Dario Amodei, the chief executive of the AI company Anthropic, predicts that “AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years.”

    This promise can be realized only if America continues to view the tools of knowledge production as legally inseparable from the knowledge itself. Yes, the printing press led to a surge of “misinformation.” But it also enabled the Enlightenment.

    The 1st Amendment is America’s great facilitator: Because of it, the government can no more regulate the printing press than it can the words printed on a page. We must extend that standard to artificial intelligence, the arena where the next great fight for free speech will be fought.
