
  • Wave of state-level AI bills raises First Amendment problems


    AI is enhancing our ability to communicate, much like the printing press and the internet did in the past. And lawmakers nationwide are rushing to regulate its use, introducing hundreds of bills in states across the country. Unfortunately, many AI bills we’ve reviewed would violate the First Amendment — just as FIRE warned against last month. It’s worth repeating that First Amendment doctrine does not reset itself after each technological advance. It protects speech created or modified with artificial intelligence software just as it protects speech created without it.

    On the flip side, AI’s involvement doesn’t change the illegality of acts already forbidden by existing law. There are some narrow, well-defined categories of speech not protected by the First Amendment — such as fraud, defamation, and speech integral to criminal conduct — that states can and do already restrict. In that sense, the use of AI is already regulated, and policymakers should first look to enforcement of those existing laws to address their concerns with AI. Further restrictions on speech are both unnecessary and likely to face serious First Amendment problems, which I detail below.

    Constitutional background: Watermarking and other compelled disclosure of AI use

    We’re seeing a lot of AI legislation that would require a speaker to disclose their use of AI to generate or modify text, images, audio, or video. Generally, this includes requiring watermarks on images created with AI, mandating disclaimers in audio and video generated with AI, and forcing developers to add metadata to images created with their software. 

    Many of these bills violate the First Amendment by compelling speech. Government-compelled speech — whether that speech is an opinion, a fact, or even just metadata — is generally anathema to the First Amendment. That’s for good reason: Compelled speech undermines everyone’s right to conscience and fundamental autonomy to control their own expression.

    To illustrate: Last year, in X Corp. v. Bonta, the U.S. Court of Appeals for the Ninth Circuit reviewed a California law that required social media companies to post and report information about their content moderation practices. FIRE filed an amicus curiae — “friend of the court” — brief in that case, arguing the posting and reporting requirements unconstitutionally compel social media companies to speak about topics on which they’d like to remain silent. The Ninth Circuit agreed, holding the law was likely unconstitutional. While acknowledging the state had an interest in providing transparency, the court reaffirmed that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”

    There are (limited) exceptions to the principle that the state cannot compel speech. In some narrow circumstances, the government may compel the disclosure of information. For example, for speech that proposes a commercial transaction, the government may require disclosure of uncontroversial, purely factual information to prevent consumer deception. (For example, under this principle, the D.C. Circuit allowed federal regulators to require disclosure of country-of-origin information about meat products.) 

    But none of those recognized exceptions would permit the government to mandate blanket disclosure of AI-generated or modified speech. States seeking to require such disclosures will face heightened scrutiny beyond what is required for commercial speech.

    AI disclosure and watermarking bills

    This year, we’re also seeing lawmakers introduce many bills that require certain disclosures whenever speakers use AI to create or modify content, regardless of the nature of the content. These bills include Washington’s HB 1170, Massachusetts’s HD 1861, New York’s SB 934, and Texas’s SB 668.

    At a minimum, the First Amendment requires these kinds of regulations to be tailored to address a particular state interest. But these bills are not aimed at any specific problem at all, much less being tailored to it; instead, they require nearly all AI-generated media to bear a digital disclaimer. 

    For example, FIRE recently testified against Washington’s HB 1170, which requires covered providers of AI to include in any AI-generated images, videos, or audio a latent disclosure detectable by an AI detection tool that the bill also requires developers to offer.

    Of course, developers and users can choose to disclose their use of AI voluntarily. But bills like HB 1170 force disclosure in constitutionally suspect ways because they aren’t aimed at furthering any particular governmental interest and they burden a wide range of speech.


    In fact, if the government’s goal is addressing fraud or other unlawful deception, there are ways these disclosures could make things worse. First, the disclosure requirement will taint the speech of non-malicious AI users by fostering the false impression that their speech is deceptive, even if it isn’t. Second, bad actors can and will find ways around the disclosure mandate — including using AI tools in other states or countries, or just creating photorealistic content through other means. False content produced by bad actors will then have a much greater imprimatur of legitimacy than it would in a world without the disclosures required by this bill, because people will assume that content lacking the mandated disclosure was not created with AI.

    Constitutional background: Categorical ‘deepfake’ regulations

    A handful of bills introduced this year seek to categorically ban “deepfakes.” In other words, these bills would make it unlawful to create or share AI-generated content depicting someone saying or doing something that the person did not in reality say or do.

    Categorical exceptions to the First Amendment exist, but these exceptions are few, narrow, and carefully defined. Take, for example, false or misleading speech. There is no general First Amendment exception for misinformation or disinformation or other false speech. Such an exception would be easily abused to suppress dissent and criticism.

    There are, however, narrow exceptions for deceptive speech that constitutes fraud, defamation, or appropriation. In the case of fraud, the government can impose liability on speakers who knowingly make factual misrepresentations to obtain money or some other material benefit. For defamation, the government can impose liability for false, derogatory speech made with the requisite intent to harm another’s reputation. For appropriation, the government can impose liability for using another person’s name or likeness without permission, for commercial purposes.


    Like an email message or social media post, AI-generated content can fall under one of these categories of unprotected speech, but the Supreme Court has never recognized a categorical exception for creating photorealistic images or video of another person. Context always matters.

    Although some people will use AI tools to produce unlawful or unprotected speech, the Court has never permitted the government to institute a broad technological ban that would stifle protected speech on the grounds that the technology has a potential for misuse. Instead, the government must tailor its regulation to the problem it’s trying to solve — and even then, the regulation will still fail judicial scrutiny if it burdens too much protected speech.

    AI-generated content has a wide array of potential applications, spanning from political commentary and parody to art, entertainment, education, and outreach. Users have deployed AI technology to create political commentary, like the viral deepfake of Mark Zuckerberg discussing his control over user data — and for parody, as seen in the Donald Trump pizza commercial and the TikTok account dedicated to satirizing Tom Cruise. In the realm of art and entertainment, the Dalí Museum used deepfake technology to bring the artist back to life, and the TV series “The Mandalorian” recreated a young Luke Skywalker. Deepfakes have even been used for education and outreach, with a deepfake of David Beckham raising awareness about malaria.

    These examples should not be taken to suggest that AI is always a positive force for shaping public discourse. It’s not. But not only will categorical bans on deepfakes restrict protected expression such as the examples above, they’ll face — and are highly unlikely to survive — the strictest judicial scrutiny under the First Amendment.

    Categorical deepfake prohibition bills

    Bills with categorical deepfake prohibitions include North Dakota’s HB 1320 and Kentucky’s HB 21.

    North Dakota’s HB 1320, a failed bill that FIRE opposed, is a clear example of what would have been an unconstitutional categorical ban on deepfakes. The bill would have made it a misdemeanor to “intentionally produce, possess, distribute, promote, advertise, sell, exhibit, broadcast, or transmit” a deepfake without the consent of the person depicted. It defined a deepfake as any digitally-altered or AI-created “video or audio recording, motion picture film, electronic image, or photograph” that deceptively depicts something that did not occur in reality and includes the digitally-altered or AI-created voice or image of a person.

    This bill was overly broad and would have criminalized vast amounts of protected speech. It was so broad that it would be like making it illegal to paint a realistic image of a busy public park without obtaining everyone’s consent. Why make it illegal for that same painter to take their realistic painting and bring it to life with AI technology?


    HB 1320 would have prohibited the creation and distribution of deepfakes regardless of whether they cause actual harm. But, as noted, there isn’t a categorical exception to the First Amendment for false speech, and deceptive speech that causes specific, targeted harm to individuals is already punishable under narrowly defined First Amendment exceptions. If, for example, someone creates and distributes to other people a deepfake showing someone doing something they didn’t in reality do, thus effectively serving as a false statement of fact, the depicted individual could sue for defamation if they suffered reputational harm. But this doesn’t require a new law.

    Even if HB 1320 were limited to defamatory speech, enacting new, technology-specific laws where existing, generally applicable laws already suffice risks sowing confusion that will ultimately chill protected speech. Such technology-specific laws are also easily rendered obsolete and ineffective by rapidly advancing technology.

    HB 1320’s overreach clashed with clear First Amendment protections. Fortunately, the bill failed to pass.

    Constitutional background: Election-related AI regulations

    Another large bucket of bills that we’re seeing would criminalize or create civil liability for the use of AI-generated content in election-related communications, without regard to whether the content is actually defamatory.

    Like categorical bans on AI, regulations of political speech have serious difficulty passing constitutional muster. Political speech receives strong First Amendment protection and the Supreme Court has recognized it as essential for our system of government: “Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution.”


    As noted above, the First Amendment protects a great deal of false speech, so these regulations will be subject to strict scrutiny when challenged in court. This means the government must prove the law is necessary to serve a compelling state interest and is narrowly tailored to achieving that interest. Narrow tailoring in strict scrutiny requires that the state meet its interest using the least speech-restrictive means.

    This high bar protects the American people from poorly tailored regulations of political speech that chill vital forms of political discourse, including satire and parody. Vigorously protecting free expression ensures robust democratic debate, which can counter deceptive speech more effectively than any legislation.

    Under strict scrutiny, prohibitions or restrictions on AI-modified or generated media relating to elections will face an uphill battle. No elections in the United States have been decided, or even materially impacted, by any AI-generated media, so the threat — and the government’s interest in addressing it — remains hypothetical. Even if that connection were established, many of the current bills are not narrowly tailored; they would burden all kinds of AI-generated political speech that poses no threat to elections. Meanwhile, laws against defamation already provide an alternative means for candidates to address deliberate lies that harm them through reputational damage.

    Already, a court has blocked one of these laws on First Amendment grounds. In a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, a federal court recently applied strict scrutiny and blocked a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content.

    Election-related AI bills

    Unfortunately, many states have jumped on the bandwagon to regulate AI-generated media relating to elections. In December, I wrote about two bills in Texas — HB 556 and HB 228 — that would criminalize AI-generated content related to elections. Other bills now include Alaska’s SB 2, Arkansas’s HB 1041, Illinois’s SB 150, Maryland’s HB 525, Massachusetts’s HD 3373, Mississippi’s SB 2642, Missouri’s HB 673, Montana’s SB 25, Nebraska’s LB 615, New York’s A 235, South Carolina’s H 3517, Vermont’s S 23, and Virginia’s SB 775.

    For example, S 23, a Vermont bill, bans a person from seeking to “publish, communicate, or otherwise distribute a synthetic media message that the person knows or should have known is a deceptive and fraudulent synthetic media of a candidate on the ballot.” According to the bill, synthetic media means content that creates “a realistic but false representation” of a candidate created or manipulated with “the use of digital technology, including artificial intelligence.”

    Under this bill (and many others like it), if someone merely reposted a viral AI-generated meme of a presidential candidate that portrayed that candidate “saying or doing something that did not occur,” the candidate could sue the reposter to block them from sharing it further, and the reposter could face a substantial fine should the state pursue the case further. This would greatly burden private citizens’ political speech, and would burden candidates’ speech by giving political opponents a weapon to wield against each other during campaign season. 

    Because no reliable technology exists to detect whether media has been produced by AI, candidates can easily weaponize these laws to challenge all campaign-related media that they simply do not like. To cast a serious chill over electoral discourse, a motivated candidate need only file a bevy of lawsuits or complaints that raise the cost of speaking out to an unaffordable level.

    Instead of voter outreach, political campaigning would turn into lawfare.

    Concluding Thoughts

    That’s a quick round-up of the AI-related legislation I’m seeing at the moment and how it impacts speech. We’ll keep you posted!




  • The Reality behind the wave function and Relativity



    Einstein’s Explanation of the Unexplainable

    One can define reality as the world or the state of things as they actually exist, as opposed to an idealistic or notional idea of them.

    Currently there are two ways science attempts to explain and define the reality of our universe. The first is Quantum mechanics, the branch of physics that defines its evolution in terms of the probabilities associated with the wave function. The other is the deterministic environment of Relativity, which defines it in terms of a physical interaction between space and time.

    Specifically, Relativity would define the observable positions of particles in terms of where the point defining their center of mass is located, while quantum mechanics uses the mathematical interpretation of the wave function to define the most probable position of a particle when observed.

    Since we all live in the same world, you would expect the probabilistic approach of quantum mechanics to be compatible with the deterministic one of Einstein. Unfortunately, they define two different worlds which appear to be incompatible. One defines existence in terms of probabilities while the other defines it in terms of the deterministic properties of space and time.

    However, to show why those probabilities appear to be incompatible with Relativity’s determinism even though they are NOT, it will be necessary to explain the evolution of the quantum environment in terms of a deterministic interaction between the components of a space-time environment.

    For example, when we roll dice in a casino, most of us realize the probability of a six appearing is related to, or caused by, the dice’s physical interaction with the properties of the table where they are rolled. Putting it another way, what defines the fact that a six appears is NOT the probability of getting one but the interaction of the dice with the table and the casino it occupies.
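    The dice analogy above can be illustrated with a short simulation: each roll is determined by a physical process, yet the relative frequency of sixes still converges to the familiar probability of 1/6. This is a minimal sketch; the function name and seed are illustrative, not from the source.

    ```python
    import random

    def six_frequency(n_rolls, seed=0):
        """Roll a fair six-sided die n_rolls times and return the
        fraction of rolls that came up six. Each individual roll is
        fully determined (here, by the seeded generator), but the
        aggregate frequency still matches the probability 1/6."""
        rng = random.Random(seed)
        sixes = sum(1 for _ in range(n_rolls) if rng.randint(1, 6) == 6)
        return sixes / n_rolls

    # The relative frequency approaches 1/6 as rolls accumulate.
    for n in (100, 10_000, 1_000_000):
        print(n, six_frequency(n))
    ```

    The point of the sketch is that the probability describes the aggregate outcome of many deterministic interactions, not the mechanism of any single roll.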

    This suggests that to show the “reality” behind the wave function, one MUST explain how its environment evolves in terms of how the physical components of space-time interact to define a particle’s position.

    The fact that Relativity defines the evolution of space-time in terms of the energy propagated by electromagnetic waves, while Quantum Mechanics defines it in terms of the mathematical evolution of the wave function, gives us a starting point. This is because it suggests the evolution in both is defined by a wave.

    To define the position of a particle in terms of the deterministic properties of Relativity, one can use the science of wave mechanics along with the fact that Relativity tells us an electromagnetic wave moves continuously through space-time unless it is prevented from doing so by someone observing it or something interacting with it. This would result in its energy being confined to three-dimensional space. The science of wave mechanics also tells us the three-dimensional “walls” of this confinement will result in its energy being reflected back on itself, thereby creating a resonant or standing wave in three-dimensional space. This would cause its wave energy to COLLAPSE and be concentrated at the point in space WHERE a particle would be found.

    Additionally, wave mechanics tells us the energy of a resonant system such as a standing wave can only take on the discrete or quantized values associated with its fundamental frequency or a harmonic of that fundamental. This means a particle would occupy an extended volume of space defined by the wavelength of its standing wave.
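    The quantized values invoked above are the textbook standing-wave result: for a wave of speed $v$ confined between fixed boundaries a distance $L$ apart, only discrete modes survive, with frequencies and wavelengths

    ```latex
    f_n = \frac{n v}{2L}, \qquad \lambda_n = \frac{2L}{n}, \qquad n = 1, 2, 3, \dots
    ```

    Only the fundamental ($n = 1$) and its harmonics fit the confinement, which is the standard sense in which a resonant system takes on discrete, quantized values.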

    Putting it another way, what defines the fact that a particle appears where it does is NOT the probabilities associated with the wave function but a deterministic interaction of an electromagnetic wave with the physical properties of space-time.

    (NOTE: We will use a particle’s position to make the connection between the probabilities of Quantum mechanics and the determinism of Relativity, but the same logic applies to all conjugate pairs.)

    However, the probabilistic interpretation of the wave function defines its reality because it uses a mathematical point to represent the position of a particle, a point it randomly places with respect to the particle’s center. The randomness of where that point lies with respect to a particle’s center means its position, when observed, will be randomly distributed in space. Therefore, one must define its position in terms of probabilities to average out the deviations caused by that random placement.

    Yet, as was mentioned earlier, Relativity defines the position of particles in terms of where the point defining their center of mass is located. Therefore, because Relativity, like quantum mechanics, cannot precisely determine where that point is located, it too would have to define their exact position in terms of probabilities.

    However, the large number of particles in objects such as a moon or planet would average out the deviations in the positions of their individual particles, so their positions appear to be deterministic.

    But the same logic would apply to a quantum environment, because the probabilistic deviations in each particle’s position would average out, making the positions of large objects such as the moon and planets appear to be deterministic.

    This suggests the reason our universe appears indeterminate on a quantum scale while being deterministic on a macroscopic level is that, as in Relativity, those deviations are averaged out by the large number of particles in objects like the moon and planets.
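    The averaging argument above is, statistically, the law of large numbers: the mean of many independent random deviations shrinks roughly as one over the square root of their number. A minimal sketch, with an illustrative function name and Gaussian deviations as an assumption:

    ```python
    import random
    import statistics

    def mean_deviation(n_particles, sigma=1.0, seed=0):
        """Each particle's observed position deviates randomly from its
        true position (taken as 0) with spread sigma. The magnitude of
        the mean deviation of a collection of n_particles shrinks
        roughly as 1 / sqrt(n_particles)."""
        rng = random.Random(seed)
        deviations = [rng.gauss(0.0, sigma) for _ in range(n_particles)]
        return abs(statistics.fmean(deviations))

    # The net deviation of a large body is tiny even though each
    # constituent particle's position is individually uncertain.
    for n in (10, 1_000, 100_000):
        print(n, mean_deviation(n))
    ```

    On this picture, a moon-sized collection of particles behaves deterministically for the same reason a casino's bottom line is predictable while any single roll is not.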

    As was mentioned earlier one can define reality as the world or the state of things as they actually exist, as opposed to an idealistic or notional idea of them.

    Therefore, as was shown above, one can define the reality of the probabilistic world of quantum mechanics and the deterministic one of Relativity by assuming the actual existence of an electromagnetic wave whose evolution can be described by the notional idea of the wave function.

