Tag: offensive

  • California wants to make platforms pay for offensive user posts. The First Amendment and Section 230 say otherwise.

    This week, FIRE wrote to California Governor Gavin Newsom, urging him to veto SB 771, a bill that would allow users and government enforcers to sue large social media platforms for enormous sums if their algorithms relay user-generated content that contributes to a violation of certain civil rights laws.

    Obviously, platforms will have a difficult time knowing whether any given post might later be alleged to have violated a civil rights law. So to avoid the risk of huge penalties, they will simply suppress any content (and any user) that could be considered hateful or controversial — even when the speech is fully protected by the First Amendment.

    And that’s exactly what the California legislature wants. In its bill analysis, the staff of the Senate Judiciary Committee chair made clear that their goal was not just to target unlawful speech, but to make platforms wary of hosting “hate speech” more generally:

    This cause of action is intended to impose meaningful consequences on social media platforms that continue to push hate speech . . . to provide a meaningful incentive for social media platforms to pay more attention to hate speech . . . and to be more diligent about not serving such content.

    Supporters have tried to evade SB 771’s First Amendment and Section 230 concerns, largely by obfuscating what the bill actually does. To hear them tell it, SB 771 doesn’t create any new liability; it just holds social media companies responsible if their algorithms aid and abet a violation of civil rights law, which is already illegal.

    But if you look just a little closer, that explanation doesn’t hold up. To understand why, it’s important to clarify what “aiding and abetting” liability is. Fortunately, the Supreme Court explained it just recently, in a case that was itself about social media algorithms.

    In Twitter v. Taamneh, the plaintiffs claimed that social media platforms had aided and abetted acts of terrorism by algorithmically arranging, promoting, and connecting users to ISIS content, and by failing to prevent ISIS from using their services after being made aware of the unlawful use.

    The Supreme Court ruled that they had not made out a claim, because aiding and abetting requires not just awareness of the wrongful goals, but also a “conscious intent to participate in, and actively further, the specific wrongful act.” All the social media platforms had done was create a communications infrastructure that treated ISIS content just like any other content — and that is not enough.

    California law likewise requires knowledge, intent, and active assistance for aiding-and-abetting liability. But nobody really thinks the platforms have designed their algorithms to facilitate civil rights violations. So SB 771 has a problem: under the existing standard, it will never do anything, which is obviously not what its supporters intend. That is why they hope to create a new form of liability — recklessly aiding and abetting — for when platforms know there’s a serious risk of harm and choose to ignore it.

    But wait, there’s more.

    SB 771 also says that, by law, platforms are considered to have actual knowledge of how their algorithms interact with every user, including why every single piece of content will or will not be shown to them. This is just another way of saying that every platform knows there’s a chance users will be exposed to harmful content. All that’s left is for users to show that a platform consciously ignored that risk. 

    That will be trivially easy. Here’s the argument: the platform knew of the risk and still deployed the algorithm instead of trying to make it “safer.” 

    Soon, social media platforms will be liable solely for using an “unsafe” algorithm, even if they were entirely unaware of the offending content, let alone had any reason to think it was unlawful.

    But the First Amendment demands that any liability for distributing speech be premised on the distributor’s knowledge of the expression’s nature and character. Otherwise, nobody would be able to distribute expression they haven’t inspected, which “would tend to restrict the public’s access to [expression] the State could not constitutionally suppress directly.” Unfortunately for California, the very goal it wants SB 771 to accomplish is what makes the bill unconstitutional.

    And this liability is not restricted to content recommendation algorithms (though it would still be unconstitutional if it were). SB 771 doesn’t define “algorithm” beyond the function of “relay[ing] content to users.” But every piece of content on social media, whether in a chronological or recommendation-based feed, is displayed to users by an algorithm. So SB 771 will impose liability every time any piece of content is shown to any user on social media.

    This is where Section 230 also has something to say. One of the most consequential laws governing the internet, Section 230 states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” and prohibits states from imposing any liability inconsistent with it. In other words, the creator of the unlawful content is responsible for it, not the service they used to do so. Section 230 has been critical to the internet’s speech-enabling character. Without it, hosting the speech of others at any meaningful scale would be far too risky.

    SB 771 tries to make an end-run around Section 230 by providing that “deploying an algorithm that relays content to users may be considered to be an act of the platform independent from the message of the content relayed.” In other words, California is trying to redefine the liability: “we’re not treating you as the publisher of that speech, we’re just holding you liable for what your algorithm does.”

    But there can be no liability without the content relayed by the algorithm. By itself, the algorithm does not cause any harm recognized by law. It’s the user-generated content that causes the ostensible civil rights violation.

    And that’s to say nothing of the fact that, because all social media content is relayed by algorithm, SB 771 would effectively nullify Section 230 by imposing liability on all content. California cannot evade federal law by waving a magic wand and declaring the thing Section 230 protects to be something else.

    Newsom has until October 13 to make a decision. If signed, the law takes effect on Jan. 1, 2027, and in the interim, other states will likely follow suit. The result will be a less free internet, and less free speech — until the courts inevitably strike down SB 771 after costly, wasteful litigation. Newsom must not let it come to that. The best time to avoid violating the First Amendment is now.

    The second best time is also now.

  • Snitch hotlines for ‘offensive’ speech were a nightmare on campus — and now they’re coming to a neighborhood near you

    We know the term “Orwellian” gets thrown around a lot these days. But if a government entity dedicated to investigating and even reeducating Americans for protected speech doesn’t deserve the label, nothing does.

    This step towards the Stasi isn’t hypothetical, either. It’s real. The mechanisms in question are called bias reporting systems, and the odds are they’re already chilling free expression on a campus near you. What’s worse, they aren’t staying there — now municipalities and states are using them, too.

    In this explainer, we’ll break down what bias reporting systems are, how they’ve spread beyond campus, and why they’re a threat to free speech.

    What are bias reporting systems?

    If you’ve been on campus in the last decade, you’ve likely heard of bias reporting systems — or, as they’re sometimes called, bias response teams. Their structure and terminology vary, but FIRE defines a campus bias reporting system as any system that provides:

    1. a formal or explicit process for or solicitation of
    2. reports from students, faculty, staff, or the community
    3. concerning offensive conduct or speech that is protected by the First Amendment or principles of expressive or academic freedom.

    Bias reporting systems generally solicit reports of bias against identity characteristics widely found in anti-discrimination laws. Western Washington University, for example, defines a “bias incident” as “language or an action that demonstrates bias against an individual or group of people based on actual or perceived race, color, creed, religion, national origin, sex, gender identity or expression, disability, sexual orientation, age, or veteran status.” Some systems also invite reports of bias against traits like “intellectual perspective,” “political expression,” and “political belief,” or have a catch-all provision for any other allegedly biased speech.

    Many colleges have bias response teams that consist not only of administrators but also law enforcement. They often investigate complaints and summon accused students and faculty to meetings.

    You might be wondering, “Don’t civil rights laws already cover this sort of thing?” Well, not quite. Bias reporting systems cover way more expressive ground than civil rights laws do, which puts these systems at odds with First Amendment protections. They generally define “bias” in such broad or vague terms that it could be applied to basically anything the complainant doesn’t like, including protected speech. This is doubly so when a school includes that vague and subjective word “hate” as another form of language or behavior worth reporting.

    That’s a problem at public colleges, which are bound by the First Amendment, and also at private colleges that voluntarily adopt First Amendment-like standards. Bias reporting systems ignore the fact that “hate speech” has no legal definition, and that unless a given expression falls into one of the narrow, well-defined categories of unprotected speech, like true threats or incitement to imminent violence, it is almost certainly protected by the First Amendment. This remains true regardless of how anyone might feel about the speech itself.

    These initiatives invite, and in many cases actively encourage, people to report one another for disfavored expression. As you can imagine, these systems often lead to unconstitutional infringements on protected student and faculty speech and chill expression on campus.

    For example, after the University of California, San Diego received bias incident reports about a student humor publication that satirized “safe spaces,” administrators asked the university’s lawyer to “think creatively” about how to address the newspaper, which they felt “crosse[d] the ‘free speech’ line.” And at Connecticut College, pro-Palestinian students were reported for flyers mimicking Israeli eviction notices to Palestinians, prompting an investigation by a dean.

    These are just a couple of instances where bias reporting systems have crossed the line. Sadly, there are plenty more, spanning FIRE’s research and commentary going back as far as 2016 — and none of them are good news.

    Sound Orwellian enough for you yet? Wait until you hear how bias reporting systems work off campus.

    Bias reporting systems have graduated from campus into everyday life

    Exporting campus bias reporting systems to wider society is a disastrous idea. No state should be employing de facto speech police. But of course, that hasn’t stopped state and city governments from trying.

    Bias reporting systems have been popping up in one form or another in more than a dozen states and cities over the last four years, usually consisting of an online portal or telephone number where citizens are encouraged to submit reports.

    If you’re thinking this is just like the hate crime hotlines that many states have had for years, there is one important difference: namely, the word “crime.” While the new bias reporting systems will similarly accept reports of criminal acts, they also actively solicit reports of speech and behavior that are not only not crimes, but also First Amendment-protected expression.

    They know this, too.

    Vermont state police protocol, for instance, describes the information it compiles as being on “biased but protected speech.” This raises the obvious question of why the police are concerning themselves with Americans lawfully exercising their fundamental rights, and opens the door to police responses that violate those rights.

    Wherever they’ve popped up, these bias reporting systems have been bad news. Washington Free Beacon journalist Aaron Sibarium’s research has turned up a number of alarming examples. In Oregon, citizens can report “offensive ‘jokes’” and “imitating someone’s cultural norm or practice.”

    Meanwhile, in Maryland, the attorney general’s office states on its website that “people who engage in bias incidents may eventually escalate into criminal behavior,” which is why “Maryland law enforcement agencies are required by law to record and report data on both hate crimes and bias incidents.” But these speculative concerns do not justify the chilling effect bias reporting systems create. Not only do these systems solicit complaints about protected speech, they also cast an alarmingly wide net. It’s hard to believe, for instance, that many “offensive jokes” are reliable signs of future criminal activity.

    But that’s not the worst of it. In Philadelphia — home of FIRE, the Liberty Bell, and the Constitution — authorities fielding “hate incidents” can now ask for exact addresses and various identifying details about the alleged offending party, including their names. According to Sibarium, city officials will in some cases “contact those accused of bias and request that they attend sensitivity training.”

    You heard that right. If you’re reported for a “non-criminal bias incident” in the city of Philadelphia, the city may request that you take a course meant to teach you the error of your ways. “If it is not a crime, we sometimes contact the offending party and try to do training so that it doesn’t happen again,” Saterria Kersey, a spokeswoman for the Philadelphia Commission on Human Relations, told Sibarium.

    The training is voluntary, but it reflects an unsettling level of government interference in the thoughts and opinions of the public.

    At this point you’d be forgiven for thinking that “Orwellian” is an understatement.

    Bias reporting systems are a threat to free speech on and off campus

    Thankfully, there has been considerable pushback against bias reporting systems, though it has not been entirely successful. In Washington, for example, a bill to create a statewide bias reporting system failed to advance out of the Senate Ways and Means Committee. However, a new version of the bill passed in March of 2024, and the state is now set to establish a bias reporting system this year.

    The threat remains real, and the consequences of these speech-chilling initiatives are further-reaching than it might seem at first glance.

    On campus, the mere existence of bias reporting systems threatens one of the purposes of higher education, if not the purpose: the free exchange of ideas. Some courts have recognized that bias reporting systems may chill protected speech to such a degree that they violate the First Amendment.

    The state-level reporting systems raise similar First Amendment issues — especially when law enforcement is involved. Like their campus counterparts, the state systems use expansive definitions of “bias” and “hate” that could encompass a vast range of protected expression, including speech on social or political issues.

    However, unconstitutionality isn’t the only concern. Even a bias reporting system that stays within constitutional bounds can deter people from freely expressing their thoughts and opinions. If they are afraid that the state will investigate them or place them in a government database just for saying something that offended another person, people will understandably hold their tongues and suppress their own voices. Moreover, the lack of clarity around what some states actually do with the reports they collect is itself chilling.

    The ability to speak freely is core to our democracy. Any system or protocol that stifles or inhibits free expression is antithetical to the principles and ideals of our institutions of higher education and our republic. In both word and deed, bias reporting systems fundamentally undermine these principles — and now seriously threaten the First Amendment rights of not just students and faculty, but also ordinary citizens.
