Tag: FIRE

  • VICTORY: District court blocks Texas social media law after FIRE lawsuit


AUSTIN, Texas, Feb. 7, 2025 — After a lawsuit from the Foundation for Individual Rights and Expression and Davis Wright Tremaine, a district court today stopped enforcement of a Texas law that would have blocked minors’ access to broad categories of protected speech and forced websites to collect adults’ IDs or biometric data before they could access social media sites.

Northern District of Texas Judge Robert Pitman granted FIRE’s motion for a preliminary injunction against provisions of the Securing Children Online through Parental Empowerment Act (SCOPE Act) imposing content monitoring and filtering mandates, targeted advertising bans, and age-verification requirements, ruling that these measures were unconstitutionally overbroad, vague, and not narrowly tailored to serve a compelling state interest.

    “The court determined that Texas’s law was likely unconstitutional because its provisions restricted protected speech and were so vague that it made it hard to know what was prohibited,” said FIRE Chief Counsel Bob Corn-Revere. “States can’t block adults from engaging with legal speech in the name of protecting children, nor can they keep minors from ideas that the government deems unsuitable.”

    The SCOPE Act would have required social media platforms to register the age of every new user. Platforms would have been forced to track how much of their content is “harmful” to minors and, once a certain percentage is reached, force users to prove that they are 18 or older. In other words, the law would have burdened adults who wanted to view content that is fully legal for adults, serving as an effective ban for those who understandably don’t trust a third-party website with their driver’s license or fingerprints.

    The law also required websites to prevent minors from being exposed to “harmful material” that “promotes, glorifies, or facilitates” behaviors like drug use, suicide, or bullying. That definition was far too vague to pass constitutional muster: whether speech “promotes” or “glorifies” an activity is inherently subjective, and platforms had testified that they would be forced to react by censoring all discussions of those topics.


    “At what point… does alcohol use become ‘substance abuse?’” asked Judge Pitman in his ruling. “When does an extreme diet cross the line into an ‘eating disorder?’ What defines ‘grooming’ and ‘harassment?’ Under these indefinite meanings, it is easy to see how an attorney general could arbitrarily discriminate in his enforcement of the law.”

FIRE sued on August 16 on behalf of three plaintiffs who use the internet to communicate with young Texans and keep them informed on issues that affect them. A fourth plaintiff, M.F., is a 16-year-old rising high school junior from El Paso who is concerned that Texas is blocking his access to important content.

    Lead plaintiff Students Engaged in Advancing Texas represents a coalition of Texas students who seek to increase youth visibility and participation in policymaking.


    “Young people have free speech rights, too,” said SEAT Executive Director Cameron Samuels. “They’re also the future voters and leaders of Texas and America. The SCOPE Act would make youth less informed, less active, and less engaged on some of the most important issues facing the nation.”

Earlier, Judge Pitman enjoined the content moderation requirements while ruling on a separate lawsuit from the Computer & Communications Industry Association and NetChoice. Judge Pitman ruled in August that Texas “cannot pick and choose which categories of protected speech it wishes to block teenagers from discussing online.”

    “This is a tremendous victory against government censorship, especially for our clients—ordinary citizens—who stood up to the State of Texas,” said Adam Sieff, partner at Davis Wright Tremaine. “The Court enjoined every substantive provision of the SCOPE Act we challenged, granting even broader relief than its first preliminary injunction. We hope this decision will give other states pause before broadly restricting free expression online.”

Texas lawmakers perhaps could have predicted today’s ruling. Age verification laws have been enjoined by courts across the country in states including California, Arkansas, Mississippi, and Ohio, and even, initially, in Texas, where another such law is currently before the Supreme Court for review.

    “Today’s ruling should serve as yet another warning to states tempted to jump on the unconstitutional bandwagon of social media age verification bills,” said Corn-Revere. “What these laws have in common is that they seek to impose simplistic one-size-fits-all solutions to address complicated problems.” 


    The Foundation for Individual Rights and Expression (FIRE) is a nonpartisan, nonprofit organization dedicated to defending and sustaining the individual rights of all Americans to free speech and free thought — the most essential qualities of liberty. FIRE educates Americans about the importance of these inalienable rights, promotes a culture of respect for these rights, and provides the means to preserve them.

    CONTACT:

    Alex Griswold, Communications Campaign Manager, FIRE: 215-717-3473; media@thefire.org

     


  • FIRE kicks off legislative season by opposing speech-restrictive AI bill


    The legislative season is in full swing, and FIRE is already tackling a surge of speech-restrictive bills. We started with Washington’s House Bill 1170, which would require AI-generated content to include a disclosure.  

FIRE Legislative Counsel John Coleman testified in opposition to the bill. In his testimony, John emphasized what FIRE has been saying for years: that the “government can no more compel an artist to disclose whether they created a painting from a human model as opposed to a mannequin than it can compel someone to disclose that they used artificial intelligence tools in creating an expressive work.”

    Artificial intelligence, like earlier technologies such as the printing press, the camera, and the internet, has the power to revolutionize communication. The First Amendment protects the use of all these mediums for expression and forbids government interference under most circumstances. Importantly, the First Amendment protects not only the right to speak without fear of government retaliation but also the right not to speak. Government-mandated disclosures relating to speech, like those required under HB 1170, infringe on these protections and so are subject to heightened levels of First Amendment scrutiny. 


    Of course, as John stated, “Developers and users can choose to disclose their use of AI voluntarily, but government-compelled speech, whether that speech is an opinion or fact or even just metadata . . . undermines everyone’s fundamental autonomy to control their own expression.”

    In fact, the U.S. Court of Appeals for the Ninth Circuit (which includes Washington state) reiterated this fundamental principle just last year in X Corp. v. Bonta when it blocked a California law requiring social media platforms to publish information about their content moderation practices. Judge Milan D. Smith, Jr. acknowledged the government’s stated interest in transparency, but emphasized that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”

    This principle is likely to put HB 1170 in significant legal jeopardy.


    Another major problem with the policy embodied by HB 1170 is that it would apply to all AI-generated media rather than targeting a specific problem, like unlawful deceptive uses of AI, such as defamation. John pointed out to lawmakers that “if the intent of the bill is to root out deceptive uses of AI, this bill would do the opposite” by fostering the false impression that all AI-generated media is deceptive. In reality, AI-generated media — like all media — can be used to share both truth and falsehood. 

Moreover, people using AI to commit actual fraud will likely find ways to avoid disclosing that AI was used, whether by removing evidence of AI use or by using tools from states without disclosure requirements. As a result, their deceptive content will appear more legitimate than it would in a world without the bill’s mandated disclosures, because people will be more likely to assume that content lacking a disclosure was not created with AI.

    Rather than preemptively imposing blanket rules that will stifle free expression, lawmakers should instead assess whether existing legal frameworks sufficiently address the concerns they have with AI. 

    FIRE remains committed to defending the free speech rights of all Americans and will continue to advocate against overbroad policies that stifle innovation and expression.


  • FIRE statement on reports of forthcoming executive order on student visas and campus protests


    President Donald Trump is expected to sign an executive order today threatening action against international students in the United States for their involvement in campus protests related to Israel and Hamas. 

    Per reports, President Trump promises to “quickly cancel the student visas of all Hamas sympathizers on college campuses, which have been infested with radicalism like never before,” and to deport students who joined “pro-jihadist protests.” 

    The revocation of student visas should not be used to punish and filter out ideas disfavored by the federal government. The strength of our nation’s system of higher education derives from the exchange of the widest range of views, even unpopular or dissenting ones.

    Students who commit crimes — including vandalism, threats, or violence — must face consequences, and those consequences may include the loss of a visa. But if today’s executive order reaches beyond illegal activity to instead punish students for protest or expression otherwise protected by the First Amendment, it must be withdrawn.


  • FIRE to University of Texas at Dallas: Stop censoring the student press


    The University of Texas at Dallas has a troubling history of trying to silence students. Now those students are fighting back.

    Today, the editors of The Retrograde published their first print edition, marking a triumphant return for journalism on campus in the face of administrative efforts to quash student press.

    Headlines above the fold of the first issue of The Retrograde, a new independent student newspaper at UT Dallas.

    Why call the newspaper The Retrograde? Because it’s replacing the former student newspaper, The Mercury, which ran into trouble when it covered the pro-Palestinian encampments on campus and shed light on UT Dallas’s use of state troopers (the same force that broke up UT Austin’s encampment just one week prior) and other efforts to quash even peaceful protest. As student journalists reported, their relationship with the administration subsequently deteriorated. University officials demoted the newspaper’s advisor and even removed copies of the paper from newsstands. At the center of this interference were Lydia Lum, director of student media, and Jenni Huffenberger, senior director of marketing and student media, whose titles reflect the university’s resistance to editorial freedom.

    The conflict between the paper and the administration came to a head when Lum called for a meeting of the Student Media Oversight Board, a university body which has the power to remove student leaders, accusing The Mercury’s editor-in-chief, Gregorio Olivares Gutierrez, of violating student media bylaws by having another form of employment, exceeding printing costs, and “bypassing advisor involvement.” Yet rather than follow those same bylaws, which offer detailed instructions for removing a student editor, Lum told board members from other student media outlets not to attend the meeting. A short-handed board then voted to oust Gutierrez. Adding insult to injury, Huffenberger unilaterally denied Gutierrez’s appeal, again ignoring the bylaws, which require the full board to consider any termination appeals.


    In response, The Mercury’s staff went on strike, demanding Gutierrez’s reinstatement. To help in that effort, FIRE and the Student Press Law Center joined forces to pen a Nov. 12, 2024 letter calling for UT Dallas to honor the rights of the student journalists. We also asked them to pay the students the money they earned for the time they worked prior to the strike.

    UT Dallas refused to listen. Instead of embracing freedom of the press, the administration doubled down on censorship, ignoring both the students’ and our calls for justice.

FIRE took out a full-page ad in the first issue of The Retrograde at UT Dallas, headlined “FIRE Supports Student Journalism.”

In our letter, we argued that the university’s firing of Gutierrez was in retaliation for The Mercury’s unflattering coverage of the way administrators had handled the encampments. This is not even the first time UT Dallas has chosen censorship as the “best solution”; look no further than late 2023, when the university removed the “Spirit Rocks” students used to express themselves. Unfortunately, the university ignored both the students’ exhortations and FIRE’s demands, leaving UT Dallas without its newspaper.

    But FIRE’s Student Press Freedom Initiative is here to make sure censorship never gets the last word.

    Students established The Retrograde, a fully independent newspaper. Without university resources, they have had to crowdfund and source their own equipment, working spaces, a new website, and everything else necessary to provide quality student-led journalism to the UT Dallas community. They succeeded, and FIRE is proud to support their efforts, placing a full-page ad in this week’s inaugural issue of The Retrograde.

    The fight for press freedom at UT Dallas is far from over — but we need your help to make a difference.

    Demand accountability from UT Dallas. The student journalists of The Retrograde have shown incredible spirit. With your help, we can ensure their efforts — and the rights of all student journalists — are respected.


  • FIRE statement on Supreme Court’s ruling in TikTok v. Garland


    The Supreme Court today ruled that a federal law compelling TikTok’s parent company, ByteDance, to sell the social media platform or cease operations in the United States does not violate the First Amendment. The law functionally requires TikTok to shut down its operations by Jan. 19 absent some other accommodation.

    FIRE issued the following statement:

    Our unique national commitment to freedom of expression requires more caution than today’s ruling delivers. The unprecedented ban of a communication platform used by 170 million Americans demands strict judicial scrutiny, not the rushed and highly deferential review the Supreme Court instead conducted. 

    The Court explicitly notes the “inherent narrowness” of today’s decision. FIRE will hold it to that promise, and fight to contain the threat the ruling poses to our First Amendment rights. 


  • Cosmetologists can’t shoot a gun? FIRE ‘blasts’ tech college for punishing student over target practice video


    Language can be complicated. According to Merriam-Webster, the verb “blast” has as many as 15 different meanings — “to play loudly,” “to hit a golf ball out of a sand trap with explosive force,” “to injure by or as if by the action of wind.”

Recently, the word has added another definition to the list. Namely, “to attack vigorously” with criticism, as in, “to blast someone online” or “to put someone on blast.” This usage has become a common expression.

    That’s what Leigha Lemoine, a student at Horry-Georgetown Technical College, meant when she posted in a private Snapchat group that a non-student who had insulted her needed to get “blasted.” 

    But HGTC’s administration didn’t see it that way. When some students claimed they felt uncomfortable with Lemoine’s post, the college summoned her to a meeting. Lemoine explained that the post was not a threat of physical harm, but rather a simple expression of her belief that the person who had insulted her should be criticized for doing so. The school’s administrators agreed and concluded there was nothing threatening in her words.

But two days later, things took a turn. Administrators discovered a video on social media of Lemoine firing a handgun at a target. The video was recorded off campus a year prior to the discovery and had no connection to the “blasted” comment, but because she had not disclosed the video’s existence (why would she be required to?), the college decided to suspend her until the 2025 fall semester. Adding insult to injury, HGTC indicated Lemoine would be on disciplinary probation when she returned.

    Screenshots of Leigha Lemoine’s video on social media.

    HGTC administrators claim Lemoine’s post caused “a significant amount of apprehension related to the presence and use of guns.” 

    “In today’s climate, your failure to disclose the existence of the video, in conjunction with group [sic] text message on Snapchat where you used the term ‘blasted,’ causes concern about your ability to remain in the current Cosmetology cohort,” the college added.

    Never mind the context of the gun video, which had nothing to do with campus or the person she said needed to get “blasted.” HGTC was determined to jeopardize Lemoine’s future over one Snapchat message and an unrelated video. 


    FIRE wrote to HGTC on Lemoine’s behalf on Oct. 7, 2024, urging the college to reverse its disciplinary action against Lemoine. We pointed out the absurdity of taking Lemoine’s “blasted” comment as an unprotected “true threat” and urged the college to rescind her suspension. Lemoine showed no serious intent to commit unlawful violence with her comment urging others to criticize an individual, and tying the gun video to the comment was both nonsensical and deeply unjust. 

But HGTC attempted to blow FIRE off and plowed forward with its discipline. So we brought in the big guns — FIRE Legal Network member David Ashley at Le Clercq Law Firm took on the case, filing an emergency motion for a temporary restraining order. On Dec. 17, a South Carolina federal district court ordered HGTC to allow her to return to classes immediately while the case works its way through the courts.

    Jokes and hyperbole are protected speech

Colleges and universities must take genuine threats of violence on campus seriously. That sometimes requires investigations and quick institutional action to ensure campus safety. But HGTC’s treatment of Lemoine is the latest in a long line of colleges misusing the “true threats” standard to punish clearly protected speech — remarks or commentary meant as jokes or hyperbole, or otherwise unreasonable to treat as sincere.

Take over-excited rhetoric about sports. In 2022, Meredith Miller, a student at the University of Utah, posted on social media that she would detonate the nuclear reactor on campus (a low-power educational model with a microwave-sized core that one professor said “can’t possibly melt down or pose any risk”) if the football team lost its game. Campus police arrested her, and the Salt Lake County District Attorney’s Office charged her with making a terroristic threat.

    The office eventually dropped the charge, but the university tried doubling down by suspending her for two years. It was only after intervention from FIRE and an outside attorney that the university relented. But that it took such significant outside pressure — especially over a harmless joke that was entirely in line with the kind of hyperbolic rhetoric one expects in sports commentary — reveals how dramatically the university overreacted.

    Political rhetoric is often targeted as well. In 2020, Babson College professor Asheen Phansey found himself in hot water after posting a satirical remark on Facebook. After President Trump tweeted a threat that he might bomb 52 Iranian cultural sites, Phansey jokingly suggested that Iran’s leadership should publicly identify a list of American cultural heritage sites it wanted to bomb, including the “Mall of America” and the “Kardashian residence.” Despite FIRE’s intervention, Babson College’s leadership suspended Phansey and then fired him less than a day later. 

    Or consider an incident in which Louisiana State University fired a graduate instructor who left a heated, profanity-laced voicemail for a state senator in which he criticized the senator’s voting record on trans rights. The senator reported the voicemail to the police, who investigated and ultimately identified the instructor. The police closed the case after concluding that the instructor had not broken the law. You’re supposed to be allowed to be rude to elected officials. LSU nevertheless fired him.

More examples of universities misusing the true threats standard run the political gamut: A Fordham student was suspended for a post commemorating the anniversary of the Tiananmen Square massacre; a professor posted on social media in support of a police officer who attacked a journalist and was placed on leave; an adjunct instructor wished for President Trump’s assassination and had his hiring revoked; another professor posted on Facebook supporting Antifa, was placed on leave, and then sued his college. Too often, the university discipline is made more egregious by the fact that administrators continue to invoke “threatening” speech to punish clearly protected expression even after local police departments conclude that the statements in question were not actually threatening.

    What is a true threat?

    Under the First Amendment, a true threat is defined as a statement where “the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.” 

    That eliminates the vast majority of threatening speech you hear each day, and for good reason. One of the foundational cases for the true threat standard is Watts v. U.S., in which the Supreme Court ruled that a man’s remark about his potential draft into the military — “If they ever make me carry a rifle, the first man I want to get in my sights is LBJ” — constituted political hyperbole, not a true threat. The Court held that such statements are protected by the First Amendment. And rightfully so: Political speech is where the protection of the First Amendment is “at its zenith.” An overbroad definition of threatening statements would lead to the punishment of political advocacy. Look no further than controversies in the last year and a half over calls for genocide to see how wide swathes of speech would become punishable if the standard for true threats was lower. 

    Colleges and universities would do well to take Lemoine’s case as a reminder to safeguard the expressive freedoms associated with humor and hyperbolic statements. Because make no mistake, FIRE will continue to blast the ones that don’t.


  • Meta’s content moderation changes closely align with FIRE recommendations


    On Tuesday, Meta* CEO Mark Zuckerberg and Chief Global Affairs Officer Joel Kaplan announced sweeping changes to the content moderation policies at Meta (the owner of Facebook, Instagram, and Threads) with the stated intention of improving free speech and reducing “censorship” on its platforms. The changes simplify policies, replace the top-down fact-checking with a Community Notes-style system, reduce opportunities for false positives in automatic content flagging, and allow for greater user control of content feeds. All these changes mirror recommendations FIRE made in its May 2024 Report on Social Media.

Given that Meta’s platforms boast billions of users, the changes, if implemented, would have major positive implications for free expression online.

    FIRE’s Social Media Report


    In our report, we promoted three principles to improve the state of free expression on social media:

    1. The law should require transparency whenever the government involves itself in social media moderation decisions.
    2. Content moderation policies should be transparent to users, who should be able to appeal moderation decisions that affect them.
    3. Content moderation decisions should be unbiased and should consistently apply the criteria that a platform’s terms of service establish.

    Principle 1 is the only one where FIRE believes government intervention is appropriate and constitutional (and we created a model bill to that effect). Principles 2 and 3 we hoped would enjoy voluntary adoption by social media platforms that wanted to promote freedom of expression. 

    While we don’t know whether these principles influenced Meta’s decision, we’re pleased the promised changes align very well with FIRE’s proposals for how a social media platform committed to free expression could put that commitment into practice.

    Meta’s changes to content moderation structures

    With a candid admission that it believes 10-20% of its millions of daily content removals are mistakes, Meta announced it is taking several actions to expand freedom of expression on the platform. The first is simplification and scaling back of its rules on the boundaries of discourse. According to Zuckerberg and Kaplan:

    [Meta is] getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to be fully implemented. 

    While this is promising in and of itself, it will be enhanced by a broad change to the automated systems for content moderation. Meta is restricting its automated flagging to only the most severe policy violations. For lesser policy violations, a user will have to manually report a post for review and possible removal. Additionally, any removal will require the agreement of multiple human reviewers.

    This is consistent with our argument that AI-driven and other automated flagging systems will invariably have issues with false-positives, making human review critical. Beyond removals, Meta is increasing the confidence threshold required for deboosting a post suspected of violating policy.

    Who fact-checks the fact checkers?

    Replacing top-down fact-checking with a bottom-up approach based on X’s Community Notes feature may be just about the biggest change announced by Meta. As FIRE noted in the Social Media Report: 

    Mark Zuckerberg famously said he didn’t want Facebook to be the “arbiter of truth.” But, in effect, through choosing a third-party fact checker, Facebook becomes the arbiter of the arbiter of truth. Given that users do not trust social media platforms, this is unlikely to engender trust in the accuracy of fact checks.

Zuckerberg similarly said in the announcement that Meta’s “fact checkers have just been too politically biased, and have destroyed more trust than they’ve created.”

    Our Social Media Report argued that the Community Notes feature is preferable to top-down fact-checking, because a community of diverse perspectives will likely be “less vulnerable to bias and easier for users to trust than top-down solutions that may reflect the biases of a much smaller number of stakeholders.” Additionally, we argued labeling is more supportive of free expression, being a “more speech” alternative to removal and deboosting.

We are eager to see the results of this shift. At a minimum, experimentation and innovation in content moderation practices provide critical experience and data to guide future decisions and help platforms improve reliability, fairness, and responsiveness to users.

    User trust and the appearance of bias

    An overall theme in Zuckerberg and Kaplan’s remarks is that biased decision-making has eroded user trust in content moderation at Meta, and these policy changes are aimed at regaining users’ trust. As FIRE argued in our Social Media Report:

    In the case of moderating political speech, any platform that seeks to promote free expression should develop narrow, well-defined, and consistently enforceable rules to minimize the kind of subjectivity that leads to arbitrary and unfair enforcement practices that reduce users’ confidence both in platforms and in the state of free expression online.

    We also argued that perception of bias and flexibility in rules encourages powerful entities like government actors to “work the refs,” including through informal pressure, known as “jawboning.”


    Additionally, when perceived bias drives users to small, ideologically homogeneous alternative platforms, the result can damage broader discourse:

    If users believe their “side” is censored unfairly, many will leave that platform for one where they believe they’ll have more of a fair shake. Because the exodus is ideological in nature, it will drive banned users to new platforms where they are exposed to fewer competing ideas, leading to “group polarization,” the well-documented phenomenon that like-minded groups become more extreme over time. Structures on all social media platforms contribute to polarization, but the homogeneity of alternative platforms turbocharges it.

    These are real problems, and it is not clear whether Meta’s plans will succeed in addressing them, but it is welcome to see them recognized.

    International threats to speech

Our Social Media Report expressed concern that the Digital Services Act — the broad EU regulation mandating censorship on social media far beyond what U.S. constitutional law allows — would become a least-common-denominator approach for social media companies, even in the United States. Zuckerberg signaled that Meta intends to do no such thing, stating he planned to work with President Trump to push back on “governments around the world” that are “pushing [companies] to censor more.”

    While we are pleased at the implication that Meta’s platforms will seemingly not change their free expression policies in America at the behest of the EU, the invocation of a social media company working with any government, including the United States government, rings alarm bells for any civil libertarian. We will watch this development closely for that reason. 

    FIRE has often said — and it often bears repeating — the greatest threat to freedom of expression will always come from the government, and as Zuckerberg himself notes, the government has in years past pushed Meta to remove content.

    When the rubber meets the road

    Meta’s commitment to promote freedom of expression on its platforms offers plenty of reasons for cautious optimism. 

    But we do want to emphasize caution. There is, with free expression, often a large gap between stated intentions and what happens when theory meets practice. As a civil liberties watchdog, our duty is to measure promise against performance.

    Take, for example, our measured praise for Elon Musk’s stated commitment to free expression, followed by our frequent criticism when he failed to live up to that commitment. And that criticism hasn’t kept us from giving credit when due to X, such as when it adopted Community Notes. 

    Similarly, FIRE stands ready to help Meta live up to its stated commitments to free expression. You can be sure that we will watch closely and hold them accountable.

    * Meta has donated to FIRE.


  • FIRE statement on legislative proposals to regulate artificial intelligence


    As the 2025 legislative calendar begins, FIRE is preparing for lawmakers at both the state and federal levels to introduce a deluge of bills targeting artificial intelligence. 

    The First Amendment applies to artificial intelligence just as it does to other expressive technologies. Like the printing press, the camera, and the internet, AI can be used as an expressive tool — a technological advance that helps us communicate with one another and generate knowledge. As FIRE Executive Vice President Nico Perrino argued in The Los Angeles Times last month: “The Constitution shouldn’t be rewritten for every new communications technology.” 

    We again remind legislators that existing laws — cabined by the narrow, well-defined exceptions to the First Amendment’s broad protection — already address the vast majority of harms legislatures may seek to counter in the coming year. Laws prohibiting fraud, forgery, discrimination, and defamation, for example, apply regardless of how the unlawful activity is ultimately carried out. Liability for unlawful acts properly falls on the perpetrator of those acts, not the informational or communicative tools they use. 

Some legislative initiatives seeking to govern the use of AI raise familiar First Amendment problems. For example, regulatory proposals that would require “watermarks” on artwork created by AI or mandate disclaimers on content generated by AI violate the First Amendment by compelling speech. FIRE has argued against these kinds of efforts to regulate the use of AI, and we will continue to do so — just as we have fought against government attempts to compel speech in school, on campus, or online.

    Rather than compelling disclaimers or imposing content-based restrictions on AI-generated expression, legislators should remember the law already protects against defamation, fraud, and other illegal conduct. 

    Lawmakers have also sought to regulate or even criminalize the use of AI-generated content in election-related communications. But courts have been wary of legislative attempts to control AI’s output when political speech is implicated. Following a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, for example, a federal district court recently enjoined a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content. 

    Content-based restrictions like California’s law require strict judicial scrutiny, no matter how the expression is created. As the federal court noted, the constitutional protections “safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered.” So while lawmakers might harbor “a well-founded fear of a digitally manipulated media landscape,” the court explained, “this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.” 


    Other legislative proposals threaten the First Amendment by imposing burdens directly on the developers of AI models. In the coming months, for example, Texas lawmakers will consider the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, a sweeping bill that would impose liability on developers, distributors, and deployers of AI systems that may introduce a risk of “algorithmic discrimination,” including by private actors. The bill vests broad regulatory authority in a newly created state “Artificial Intelligence Council” and imposes steep compliance costs. TRAIGA compels developers to publish regular risk reports, a requirement that will raise First Amendment concerns when applied to an AI model’s expressive output or the use of AI as a tool to facilitate protected expression. Last year, a federal court held a similar reporting requirement imposed on social media platforms was likely unconstitutional.

TRAIGA’s provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. Addressing unlawful discrimination is an important legislative aim, and lawmakers are obligated to ensure we all benefit from the equal protection of the law. At the same time, our decades of work defending student and faculty rights have left FIRE all too familiar with the chilling effect on speech that results from expansive or arbitrary interpretations of anti-discrimination law on campus. We will oppose poorly crafted legislative efforts that would functionally build the same chill into artificial intelligence systems.

The sprawling reach of legislative proposals like TRAIGA runs headlong into the expressive rights of the people building and using AI models. Rather than compelling disclaimers or imposing content-based restrictions on AI-generated expression, legislators should remember the law already protects against defamation, fraud, and other illegal conduct. And rather than preemptively saddling developers with broad liability for an AI model’s possible output, lawmakers must instead examine the recourse existing laws already provide victims of discrimination against those who would use AI — or any other communicative tool — to unlawful ends.

    FIRE will have more to say on the First Amendment threats presented by legislative proposals regarding AI in the weeks and months to come.


  • FIRE to defend veteran pollster J. Ann Selzer in Trump lawsuit over outlier election poll


    DES MOINES, Iowa, Jan. 7, 2025 — The Foundation for Individual Rights and Expression announced today it will defend veteran Iowa pollster J. Ann Selzer pro bono against a lawsuit from President-elect Donald Trump that threatens Americans’ First Amendment right to speak on core political issues.

    “Punishing someone for their political prediction is about as unconstitutional as it gets,” said FIRE Chief Counsel Bob Corn-Revere. “This is America. No one should be afraid to predict the outcome of an election. Whether it’s from a pollster, or you, or me, such political expression is fully and unequivocally protected by the First Amendment.”

    EXPLAINER: FIGHTING TRUMP’S LAWSUIT IS FIRST AMENDMENT 101

Trump’s lawsuit stems from a poll Selzer published before the 2024 presidential election that showed Vice President Kamala Harris leading by three points in Iowa. The lawsuit, brought under Iowa’s Consumer Fraud Act, is meritless and violates long-standing constitutional principles.

    The claim distorts the purpose of consumer fraud laws, which target sellers who make false statements to get you to buy merchandise. 

    “Consumer fraud laws are about the scam artist who rolls back the odometer on a used car, not a newspaper pollster or TV meteorologist who misses a forecast,” said FIRE attorney Conor Fitzpatrick.

    Trump’s suit seeks damages and a court order barring the newspaper from publishing any future “deceptive polls” that might “poison the electorate.” But Selzer and The Des Moines Register were completely transparent about how the poll was conducted. Selzer and the newspaper released the demographic breakdowns showing the results of the telephone survey and the weighting system. Selzer also released an analysis of how her methods might have contributed to missing the mark. 

    “I’ve spent my career researching what the people of Iowa are thinking about politics and leading issues of the day,” Selzer said. “My final poll of the 2024 general election missed the mark. The response to a mismatch between my final poll and the decisions Iowa voters made should be thoughtful analysis and introspection. I should be devoting my time to that and not to a vengeful lawsuit from someone with enormous power and assets.”

Selzer’s Iowa polls have long enjoyed “gold standard” status among pollsters. She correctly predicted Trump’s wins in Iowa in 2016 and 2020 using the same methodology as her 2024 poll.

    COURTESY PHOTOS OF J. ANN SELZER FOR MEDIA USE

    “Donald Trump is abusing the legal system to punish speech he dislikes,” said FIRE attorney Adam Steinbaugh. “If you have to pay lawyers and spend time in court to defend your free speech, then you don’t have free speech.”

    America already rejected its experiment with making the government the arbiter of truth. President John Adams used the Sedition Act of 1798 to imprison political rivals for “false” political statements. Trump’s lawsuit is just a new spin on the same theory long rejected under the First Amendment.

    The lawsuit fits the very definition of a “SLAPP” suit — a Strategic Lawsuit Against Public Participation. Such tactical claims are filed purely for the purpose of harassing and imposing punishing litigation costs on perceived opponents, not because they have any merit or stand any chance of success. In other words, the lawsuit is the punishment. As Trump once colorfully put it after losing a lawsuit: “I spent a couple of bucks on legal fees, and they spent a whole lot more. I did it to make his life miserable, which I’m happy about.”

    By providing pro bono support, FIRE is helping to remove the punishment-by-process incentive of SLAPP suits — just as we’ve done when a wealthy Idaho landowner sued over criticism of his planned airstrip, when a Pennsylvania lawmaker sued a graduate student for “racketeering,” and when an education center threatened to sue a small, autistic-led, nonprofit organization for criticizing the center’s use of electric shocks.

“Pollsters don’t always get it right,” said Fitzpatrick. “When the Chicago Tribune published its famously incorrect ‘Dewey Defeats Truman’ headline, it was because the polls were off. Truman didn’t sue the newspaper. He laughed — his victory was enough. That’s how you handle missed predictions in a free society.”

    The Foundation for Individual Rights and Expression (FIRE) is a nonpartisan, nonprofit organization dedicated to defending and sustaining the individual rights of all Americans to free speech and free thought — the most essential qualities of liberty. FIRE defends free speech for all Americans, regardless of political ideology. We’ll defend your rights whether you’re a student barred from wearing a “Let’s Go Brandon” sweatshirt, a professor censored under Florida’s STOP WOKE Act, or a mother arrested for criticizing your city’s mayor. If it’s protected, we’ll defend it. No throat-clearing, no apologies.

    CONTACT:

    Daniel Burnett, Senior Director of Communications, FIRE: 215-717-3473; media@thefire.org
