North Carolina’s Democratic governor has vetoed two bills the Republican-led General Assembly passed targeting what lawmakers dubbed “diversity, equity and inclusion”; “discriminatory practices”; and “divisive concepts” in public higher education.
Senate Bill 558 would have banned institutions from having offices “promoting discriminatory practices or divisive concepts” or focused on DEI. The bill defined “discriminatory practices” as “treating an individual differently [based on their protected federal law classification] solely to advantage or disadvantage that individual as compared to other individuals or groups.”
SB 558’s list of restricted divisive concepts mirrored the lists that Republicans have inserted into laws in other states, including the idea that “a meritocracy is inherently racist or sexist” or that “the rule of law does not exist.” The legislation would have prohibited colleges and universities from endorsing these concepts.
The bill would have also banned institutions from establishing processes “for reporting or investigating offensive or unwanted speech that is protected by the First Amendment, including satire or speech labeled as microaggression.”
In his veto message Thursday, Gov. Josh Stein wrote, “Diversity is our strength. We should not whitewash history, police dorm room conversations, or ban books. Rather than fearing differing viewpoints and cracking down on free speech, we should ensure our students learn from diverse perspectives and form their own opinions.”
Stein also vetoed House Bill 171, which would have broadly banned DEI from state government. It defined DEI in multiple ways, including the promotion of “differential treatment of or providing special benefits to individuals on the basis of race, sex, color, ethnicity, nationality, country of origin, or sexual orientation.”
“House Bill 171 is riddled with vague definitions yet imposes extreme penalties for unknowable violations,” Stein wrote in his HB 171 veto message. NC Newsline reported that lawmakers might still override the vetoes.
Good news for Texans who like their speech free. Three bills that would have gutted speech protections under the Texas Citizens Participation Act are officially dead in the water.
At the start of the 2025 legislative session, FIRE teamed up with the Protect Free Speech Coalition — a broad coalition of civil liberties groups, news outlets, and other organizations that support free speech in Texas — to fight these bills.
The TCPA protects free speech by deterring frivolous lawsuits, or SLAPPs (strategic lawsuits against public participation), intended to silence citizens with the threat of court costs.
The first bill, HB 2988, would have eroded the TCPA by cutting its provision of mandatory attorney fees for speakers who successfully get a SLAPP dismissed.
That provision ensures two very important things.
First, it makes potential SLAPP filers think twice before suing. The prospect of having to pay attorney’s fees for suing over protected speech causes would-be SLAPP filers to back off.
Second, when a SLAPP is filed, mandatory fees ensure the victim can afford to defend their First Amendment rights. They no longer face the impossible choice between self-censorship and blowing their life savings on legal fees. Instead, they can fight back, knowing that they can recover their legal fees when they successfully defend their constitutionally protected expression against a baseless lawsuit.
Even though the Constitution — and not one’s finances — guarantees the freedom to speak out about issues affecting one’s community and government, making TCPA fee-shifting discretionary would have undermined that freedom for all but the most deep-pocketed Texans.
FIRE’s own JT Morris testified in opposition to HB 2988 when it received a hearing in the Judiciary & Civil Jurisprudence committee.
The other two bills — SB 336 and HB 2459 — would have made it easier for SLAPP filers to run up their victim’s legal bills before the case gets dismissed, thereby putting pressure on victims to settle and give up their rights.
Since last fall, FIRE has been working with the Protect Free Speech Coalition to oppose these bills. We’ve met with lawmakers, testified in committee, published commentary, and driven grassroots opposition.
All three bills are now officially dead for the 2025 legislative session, which ends today. That means one of the strongest anti-SLAPP laws in the country remains intact and Texans can continue speaking freely without fear of ruinous litigation.
Make no mistake: SLAPPs are censorship disguised as lawsuits. And laws like the TCPA are a vital defense against them. That defense still stands. And the First Amendment still protects you and your speech on important public issues — no matter how much money’s in your wallet.
AI is enhancing our ability to communicate, much like the printing press and the internet did in the past. And lawmakers nationwide are rushing to regulate its use, introducing hundreds of bills in states across the country. Unfortunately, many AI bills we’ve reviewed would violate the First Amendment — just as FIRE warned last month. It’s worth repeating that First Amendment doctrine does not reset itself after each technological advance. It protects speech created or modified with artificial intelligence software just as it protects speech created without it.
On the flip side, AI’s involvement doesn’t change the illegality of acts already forbidden by existing law. There are some narrow, well-defined categories of speech not protected by the First Amendment — such as fraud, defamation, and speech integral to criminal conduct — that states can and do already restrict. In that sense, the use of AI is already regulated, and policymakers should first look to enforcement of those existing laws to address their concerns with AI. Further restrictions on speech are both unnecessary and likely to face serious First Amendment problems, which I detail below.
Constitutional background: Watermarking and other compelled disclosure of AI use
We’re seeing a lot of AI legislation that would require a speaker to disclose their use of AI to generate or modify text, images, audio, or video. Generally, this includes requiring watermarks on images created with AI, mandating disclaimers in audio and video generated with AI, and forcing developers to add metadata to images created with their software.
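To make the mechanics concrete, here is a minimal sketch, assuming Python with the Pillow imaging library and a hypothetical "ai_generated" metadata field (no bill or standard specifies this exact scheme), of how a developer might embed and later read such a disclosure in a PNG file:

```python
# Minimal illustrative sketch of metadata-based AI disclosure, using the Pillow library.
# The "ai_generated" field name is hypothetical, not taken from any bill or standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with a PNG text chunk disclosing AI generation."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical disclosure field
    image.save(dst_path, format="PNG", pnginfo=metadata)

def read_disclosure(path: str) -> str | None:
    """Return the disclosure value if present, or None if absent."""
    image = Image.open(path)
    # PNG text chunks survive a straight copy but are lost on re-encoding,
    # which is one reason metadata mandates are easy for bad actors to defeat.
    return getattr(image, "text", {}).get("ai_generated")
```

Note that simply re-encoding the file without the text chunk strips the tag, which illustrates why disclosure mandates of this kind tend to burden compliant speakers while doing little to stop bad actors.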
Many of these bills violate the First Amendment by compelling speech. Government-compelled speech — whether that speech is an opinion, a fact, or even just metadata — is generally anathema to the First Amendment. That’s for good reason: Compelled speech undermines everyone’s right to conscience and fundamental autonomy to control their own expression.
To illustrate: Last year, in X Corp. v. Bonta, the U.S. Court of Appeals for the Ninth Circuit reviewed a California law that required social media companies to post and report information about their content moderation practices. FIRE filed an amicus curiae — “friend of the court” — brief in that case, arguing the posting and reporting requirements unconstitutionally compel social media companies to speak about topics on which they’d like to remain silent. The Ninth Circuit agreed, holding the law was likely unconstitutional. While acknowledging the state had an interest in providing transparency, the court reaffirmed that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”
There are limited exceptions to the principle that the state cannot compel speech. In some narrow circumstances, the government may compel the disclosure of information. For speech that proposes a commercial transaction, for example, the government may require disclosure of uncontroversial, purely factual information to prevent consumer deception. (Under this principle, the D.C. Circuit allowed federal regulators to require disclosure of country-of-origin information about meat products.)
But none of those recognized exceptions would permit the government to mandate blanket disclosure of AI-generated or modified speech. States seeking to require such disclosures will face heightened scrutiny beyond what is required for commercial speech.
AI disclosure and watermarking bills
This year, we’re also seeing lawmakers introduce many bills that require certain disclosures whenever speakers use AI to create or modify content, regardless of the nature of the content. These bills include Washington’s HB 1170, Massachusetts’s HD 1861, New York’s SB 934, and Texas’s SB 668.
At a minimum, the First Amendment requires these kinds of regulations to be tailored to address a particular state interest. But these bills are not aimed at any specific problem at all, much less tailored to one; instead, they require nearly all AI-generated media to bear a digital disclaimer.
For example, FIRE recently testified against Washington’s HB 1170, which would require covered providers of AI tools to embed in any AI-generated image, video, or audio a latent disclosure detectable by an AI detection tool that the bill also requires developers to offer.
Of course, developers and users can choose to disclose their use of AI voluntarily. But bills like HB 1170 force disclosure in constitutionally suspect ways because they aren’t aimed at furthering any particular governmental interest and they burden a wide range of speech.
In fact, if the government’s goal is addressing fraud or other unlawful deception, there are ways these disclosures could make things worse. First, the disclosure requirement will taint the speech of non-malicious AI users by fostering the false impression that their speech is deceptive, even if it isn’t. Second, bad actors can and will find ways around the disclosure mandate — including using AI tools in other states or countries, or just creating photorealistic content through other means. False content produced by bad actors will then have a much greater imprimatur of legitimacy than it would in a world without the disclosures required by this bill, because people will assume that content lacking the mandated disclosure was not created with AI.
Constitutional background: Categorical deepfake bans

A handful of bills introduced this year seek to categorically ban “deepfakes.” In other words, these bills would make it unlawful to create or share AI-generated content depicting someone saying or doing something that the person did not in reality say or do.
Categorical exceptions to the First Amendment exist, but these exceptions are few, narrow, and carefully defined. Take, for example, false or misleading speech. There is no general First Amendment exception for misinformation or disinformation or other false speech. Such an exception would be easily abused to suppress dissent and criticism.
There are, however, narrow exceptions for deceptive speech that constitutes fraud, defamation, or appropriation. In the case of fraud, the government can impose liability on speakers who knowingly make factual misrepresentations to obtain money or some other material benefit. For defamation, the government can impose liability for false, derogatory speech made with the requisite intent to harm another’s reputation. For appropriation, the government can impose liability for using another person’s name or likeness without permission, for commercial purposes.
Like an email message or social media post, AI-generated content can fall under one of these categories of unprotected speech, but the Supreme Court has never recognized a categorical exception for creating photorealistic images or video of another person. Context always matters.
Although some people will use AI tools to produce unlawful or unprotected speech, the Court has never permitted the government to institute a broad technological ban that would stifle protected speech on the grounds that the technology has a potential for misuse. Instead, the government must tailor its regulation to the problem it’s trying to solve — and even then, the regulation will still fail judicial scrutiny if it burdens too much protected speech.
AI-generated content has a wide array of potential applications, spanning from political commentary and parody to art, entertainment, education, and outreach. Users have deployed AI technology to create political commentary, like the viral deepfake of Mark Zuckerberg discussing his control over user data — and for parody, as seen in the Donald Trump pizza commercial and the TikTok account dedicated to satirizing Tom Cruise. In the realm of art and entertainment, the Dalí Museum used deepfake technology to bring the artist back to life, and the TV series “The Mandalorian” recreated a young Luke Skywalker. Deepfakes have even been used for education and outreach, with a deepfake of David Beckham raising awareness about malaria.
These examples should not be taken to suggest that AI is always a positive force for shaping public discourse. It’s not. But not only will categorical bans on deepfakes restrict protected expression such as the examples above, they’ll face — and are highly unlikely to survive — the strictest judicial scrutiny under the First Amendment.
Categorical deepfake prohibition bills
Bills with categorical deepfake prohibitions include North Dakota’s HB 1320 and Kentucky’s HB 21.
North Dakota’s HB 1320, a failed bill that FIRE opposed, is a clear example of what would have been an unconstitutional categorical ban on deepfakes. The bill would have made it a misdemeanor to “intentionally produce, possess, distribute, promote, advertise, sell, exhibit, broadcast, or transmit” a deepfake without the consent of the person depicted. It defined a deepfake as any digitally-altered or AI-created “video or audio recording, motion picture film, electronic image, or photograph” that deceptively depicts something that did not occur in reality and includes the digitally-altered or AI-created voice or image of a person.
The bill was overly broad and would have criminalized vast amounts of protected speech. It was so sweeping that it would be like making it illegal to paint a realistic image of a busy public park without obtaining the consent of everyone depicted. Why make it illegal for that same painter to bring their realistic painting to life with AI technology?
HB 1320 would have prohibited the creation and distribution of deepfakes regardless of whether they cause actual harm. But, as noted, there isn’t a categorical exception to the First Amendment for false speech, and deceptive speech that causes specific, targeted harm to individuals is already punishable under narrowly defined First Amendment exceptions. If, for example, someone creates and distributes a deepfake showing a person doing something they never did (effectively a false statement of fact), the depicted individual could sue for defamation if they suffered reputational harm. That requires no new law.
Even if HB 1320 were limited to defamatory speech, enacting new, technology-specific laws where existing, generally applicable laws already suffice risks sowing confusion that will ultimately chill protected speech. Such technology-specific laws are also easily rendered obsolete and ineffective by rapidly advancing technology.
HB 1320’s overreach clashed with clear First Amendment protections. Fortunately, the bill failed to pass.
Constitutional background: Election-related AI regulations
Another large bucket of bills that we’re seeing would criminalize or create civil liability for the use of AI-generated content in election-related communications, without regard to whether the content is actually defamatory.
Like categorical bans on AI, regulations of political speech have serious difficulty passing constitutional muster. Political speech receives strong First Amendment protection, and the Supreme Court has recognized it as essential to our system of government, as it put it in Buckley v. Valeo: “Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution.”
As noted above, the First Amendment protects a great deal of false speech, so these regulations will be subject to strict scrutiny when challenged in court. This means the government must prove the law is necessary to serve a compelling state interest and is narrowly tailored to achieving that interest. Narrow tailoring in strict scrutiny requires that the state meet its interest using the least speech-restrictive means.
This high bar protects the American people from poorly tailored regulations of political speech that chill vital forms of political discourse, including satire and parody. Vigorously protecting free expression ensures robust democratic debate, which can counter deceptive speech more effectively than any legislation.
Under strict scrutiny, prohibitions or restrictions on AI-modified or generated media relating to elections will face an uphill battle. No elections in the United States have been decided, or even materially impacted, by any AI-generated media, so the threat — and the government’s interest in addressing it — remains hypothetical. Even if that connection were established, many of the current bills are not narrowly tailored; they would burden all kinds of AI-generated political speech that poses no threat to elections. Meanwhile, laws against defamation already provide an alternative means for candidates to address deliberate lies that harm them through reputational damage.
A court has already blocked one of these laws. In a First Amendment challenge brought by a satirist who uses AI to generate parodies of political figures, a federal court recently applied strict scrutiny and blocked a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content.
Election-related AI bills
Unfortunately, many states have jumped on the bandwagon to regulate AI-generated media relating to elections. In December, I wrote about two bills in Texas — HB 556 and HB 228 — that would criminalize AI-generated content related to elections. Other bills now include Alaska’s SB 2, Arkansas’s HB 1041, Illinois’s SB 150, Maryland’s HB 525, Massachusetts’s HD 3373, Mississippi’s SB 2642, Missouri’s HB 673, Montana’s SB 25, Nebraska’s LB 615, New York’s A 235, South Carolina’s H 3517, Vermont’s S 23, and Virginia’s SB 775.
For example, S 23, a Vermont bill, bans a person from seeking to “publish, communicate, or otherwise distribute a synthetic media message that the person knows or should have known is a deceptive and fraudulent synthetic media of a candidate on the ballot.” The bill defines synthetic media as content creating “a realistic but false representation” of a candidate, created or manipulated through “the use of digital technology, including artificial intelligence.”
Under this bill (and many others like it), if someone merely reposted a viral AI-generated meme of a presidential candidate that portrayed that candidate “saying or doing something that did not occur,” the candidate could sue the reposter to block them from sharing it further, and the reposter could face a substantial fine should the state pursue the case further. This would greatly burden private citizens’ political speech, and would burden candidates’ speech by giving political opponents a weapon to wield against each other during campaign season.
Because no reliable technology exists to detect whether media has been produced by AI, candidates can easily weaponize these laws to challenge all campaign-related media that they simply do not like. To cast a serious chill over electoral discourse, a motivated candidate need only file a bevy of lawsuits or complaints that raise the cost of speaking out to an unaffordable level.
Instead of voter outreach, political campaigning would turn into lawfare.
Concluding Thoughts
That’s a quick round-up of the AI-related legislation I’m seeing at the moment and how it impacts speech. We’ll keep you posted!
OKLAHOMA CITY — Oklahoma lawmakers filed hundreds of bills affecting education for the next legislative session.
Oklahoma Voice collected some of the top trends and topics that emerged in legislation related to students, teachers and schools. The state Legislature will begin considering bills once its 2025 session begins Feb. 3.
Bills would restrict minors’ use of cellphones and social media
[Photo: A poster reads “bell to bell, no cell” at the Jenks Public Schools Math and Science Center on Nov. 13. The school district prohibits student cellphone use during class periods. (Nuria Martinez-Keel/Oklahoma Voice)]
As expected, lawmakers filed multiple bills to limit student cellphone use in public schools, an issue that leaders in both chambers of the Legislature have said is a top priority this year.
The House and Senate each have a bill that would prohibit students from using cellphones during the entire school day. Some Oklahoma schools already made this a requirement while others allow cellphone access in between classes.
Senate Bill 139 from Education Committee vice chair Sen. Ally Seifried, R-Claremore, would require all districts to ban students from accessing their cellphones from the morning bell until dismissal, and it would create a $2 million grant program to help schools enact phone-free policies.
Legislation from a House leader on education funding, Rep. Chad Caldwell, R-Enid, would prohibit student cellphone use while on school premises.
Multiple bills target children’s social media use. Sen. Kristen Thompson, R-Edmond, aims to ban social media accounts for anyone under 16 with SB 838 and, with SB 839, to deem social media addictive and dangerous for youth mental health.
A bill from Seifried would bar social media companies from collecting data from, and personalizing content for, a minor’s account, which a child wouldn’t be allowed to have without parental consent.
SB 371 from Sen. Micheal Bergstrom, R-Adair, would require districts to prohibit the use of social media on school computers or on school-issued devices while on campus. SB 932 from Sen. Darcy Jech, R-Kingfisher, would allow minors or their parents to sue a social media company over an “adverse mental health outcome arising, in whole or in part, from the minor’s excessive use of the social media platform’s algorithmically curated service.”
School chaplains in public schools

A bill that would allow chaplains in public schools did not pass last session but is back. Its original author, Rep. Kevin West, R-Moore, refiled it as House Bill 1232. Sen. Shane Jett, R-Shawnee, and Sen. Dana Prieto, R-Tulsa, filed similar school chaplain bills with SB 486 and SB 590.
More restrictions suggested for sex education, gender expression
Another unsuccessful bill returning this year is legislation that would have families opt into sex education for their children instead of opting out, which is the state’s current policy.
Students wouldn’t be allowed to take any sex education course or hear a related presentation without written permission from their parents under SB 759 from Prieto, HB 1964 from Danny Williams, R-Seminole, and HB 1998 from Rep. Tim Turner, R-Kinta.
Sen. Dusty Deevers, R-Elgin, filed SB 702, which would remove any reference to sex education and mental health from health education in schools.
Prieto’s bill also would exclude any instruction about sexual orientation or gender identity from sex education courses. It would require school employees to notify a child’s parents before referring to the student by a different name or pronouns.
Other bills similarly would limit students’ ability to be called by a different name or set of pronouns at school if it doesn’t correspond to their biological sex.
Deevers’ Free to Speak Act would bar teachers from calling students by pronouns other than what aligns with their biological sex or by any name other than their legal name without parent consent. Educators and fellow students could not be punished for calling a child by their legal name and biological pronouns.
Rep. Gabe Woolley, R-Broken Arrow, filed a similar bill.
No public school could compel an employee or volunteer to refer to a student by a name or pronoun other than what corresponds with their sex at birth under SB 847 from Sen. David Bullard, R-Durant, nor could any printed or multimedia materials in a school refer to a student by another gender.
Corporal punishment in schools
Once again, Oklahoma lawmakers will consider whether to outlaw corporal punishment of students with disabilities. State law currently prohibits using physical pain as discipline only on children with the most significant cognitive disabilities.
In 2020, the state Department of Education used its administrative rules to ban corporal punishment on any student with a disability, but similar bills have failed to pass the state Legislature, drawing frustration from child advocates.
Sen. Dave Rader, R-Tulsa, was an author of last year’s bill to prohibit corporal punishment of students with any type of disability. He filed the bill again for consideration this session.
HB 2244 from Rep. John Waldron, D-Tulsa, would require schools to report to the Oklahoma State Department of Education the number of times they administer corporal punishment along with the age, race, gender and disability status of the students receiving it. The state Department of Education would then have to compile the information in a report to the Oklahoma Commission on Children and Youth.
Oklahoma Voice is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Oklahoma Voice maintains editorial independence. Contact Editor Janelle Stecklein for questions: [email protected].
On October 26 and November 4, 2021, the House of Representatives passed H.R. 2119, the Family Violence Prevention and Services Improvement Act of 2021, and H.R. 3992, the Protect Older Job Applicants (POJA) Act of 2021, respectively. Both bills passed by close bipartisan votes — the former 228-200 and the latter 224-200 — and both are supported by President Biden.
POJA Act
As originally written, the POJA Act amends the Age Discrimination in Employment Act of 1967 (ADEA) so that its prohibition on employers limiting, segregating, or classifying employees extends to job applicants as well. The bill comes after recent rulings in the Seventh and Eleventh Circuit Courts of Appeals allowing employers to use facially neutral hiring practices that some have criticized as discriminatory against older workers. As such, the POJA Act amends the ADEA to make clear that the disparate impact provision in the original statute protects older “applicants for employment” in addition to those already employed.
Before the final vote on the bill, the House also adopted an amendment to the POJA Act that would require the Equal Employment Opportunity Commission to study how many job applicants are affected by age discrimination during the application process and to issue recommendations for addressing it.
Family Violence Prevention and Services Improvement Act
The Family Violence Prevention and Services Improvement Act amends the Family Violence Prevention and Services Act to reauthorize and increase funding for programs focused on preventing family and domestic violence and protecting survivors. One provision addressing higher education newly authorizes the Secretary of Health and Human Services to include institutions of higher education among the entities eligible for departmental grants to “conduct domestic violence, dating violence and family violence research or evaluation.”
Both the Family Violence Prevention and Services Improvement Act and the POJA Act now head to the Senate, where passage is uncertain: both will need significant Republican support to reach the sixty-vote threshold required to overcome a filibuster.
CUPA-HR will keep members apprised of any actions or votes taken by the Senate on these bills.