Tag: Deepfakes

  • FIRE opposes Virginia’s proposed regulation of candidate deepfakes

    Last year, California passed restrictions on sharing AI-generated deepfakes of candidates, which a court then promptly blocked for violating the First Amendment. Virginia now looks to be going down a similar road with a new bill to penalize people for merely sharing certain AI-generated media of political candidates.

    This legislation, introduced as SB 775 and HB 2479, would make it illegal to share artificially generated, realistic-looking images, video, or audio of a candidate to “influence an election,” if the person knew or should have known that the content is “deceptive or misleading.” Violations carry a civil penalty or, if the sharing occurred within 90 days before an election, up to one year in jail. Only by adding a conspicuous disclaimer to the media can a person avoid these penalties.

    The practical effects of this ban are alarming. Say a person in Virginia encounters a deepfaked viral video of a candidate on Facebook within 90 days of an election. They know it’s not a real image of the candidate, but they think it’s amusing and captures a message they want to share with other Virginians. It doesn’t have a disclaimer, but the person doesn’t know it’s supposed to, and doesn’t know how to edit the video anyway. They decide to repost it to their feed.

    That person could now face jail time.

    The ban would also impact the media. Say a journalist shares a deepfake that is directly relevant to an important news story. The candidate depicted decides that the journalist didn’t adequately acknowledge “in a manner that can easily be heard and understood by the average listener or viewer, that there are questions about the authenticity of the media,” as the bill requires. That candidate could sue to block further sharing of the news story.

    The First Amendment safeguards expressive tools like AI, allowing them to enhance our ability to communicate with one another without facing undue government restrictions.

    These examples illustrate the startling breadth of SB 775/HB 2479’s regulation of core political speech, which makes it unlikely to survive judicial scrutiny. Laws targeting core political speech have serious difficulty passing constitutional muster, even when they involve false or misleading speech. That’s because there is no general First Amendment exception for misinformation, disinformation, or other false speech — and for good reason: such an exception would be easily abused to suppress dissent and criticism.

    There are narrow, well-defined categories of speech not protected by the First Amendment — such as fraud and defamation — that Virginia can and does already restrict. But SB 775/HB 2479 is not limited to fraudulent or defamatory speech.

    Laws that burden protected speech related to elections must clear a very high bar to pass constitutional muster. This bill doesn’t meet that bar. It restricts far more speech than necessary to prevent voters from being deceived in ways that would affect an election, and there are other ways to address deepfakes that would burden much less speech. For one, other speakers or candidates can (and do) simply point them out, eroding their potential to deceive.

    We urge the Virginia General Assembly to oppose this legislation. If it reaches his desk, Virginia Gov. Glenn Youngkin should veto it.

  • The TRAP Test to Spot AI Deepfakes and How to NOT Be Deceived – Sovorel

    Everyone needs to develop AI Literacy skills in order to use AI properly and increase effectiveness and efficiency. Another vital part of AI Literacy is developing the critical thinking and awareness skills needed to avoid being deceived by synthetic media such as AI-created deepfakes. Cyber Magazine, an international news source, expressed the importance of this issue by stating:

    Deepfakes are inevitably becoming more advanced, which is making it harder to spot and stop those that are used with bad intentions. As access to synthetic media technology increases, deepfakes can be used to damage reputations, fabricate evidence and undermine trust.

    With deepfake technology increasingly being used for mal-intent, businesses would do well to ensure that their workforce is fully trained and aware of the risks associated with AI-generated content. (Jackson, 2023)

    To address this important issue, I have created the TRAP test:

    T: Think Critically: All of us must now bring a critical awareness and mindset to any type of digital media, since digital text, images, audio, and video can all be easily created or manipulated with generative AI. When encountering any digital media, we need to recognize that it might not be real and might be trying to manipulate our perception. We need to use the TRAP test to ask further questions that help ensure we are getting the objective truth.

    R: Realistic/Reliable/Reputable: When using digital media or viewing a video, we need to ask ourselves, “Does this seem real, and is it likely to occur?” We must also consider whether the source of the information is reliable and reputable. Is it from an official source, a well-known news outlet, a government agency, or an established organization? Always check the source.

    A: Accurate/Authority: Check whether all parts of the digital media are accurate. For example, if watching a video, are all parts accurate and consistent? Are there any issues with the eyes, the background, or the light sources? Is it consistent with other videos, images, or text? Additionally, has the media been released or authenticated by an authority? Answering these questions helps ensure validity and accuracy.

    P: Purpose/Propaganda: When reviewing any digital media, we must ask ourselves, “What is the purpose of this media?” If the answer is that it is trying to get your money or sway your vote in an election, then you should be extra sure that the information is completely truthful. Ask yourself whether the information presented is simply propaganda: biased and misleading. Be sure to ask if there is more to the story you are reading or watching.
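    The four TRAP prompts above can also be sketched as a simple checklist in code. This is purely an illustrative convenience — the question wording, field names, and pass/fail scoring below are hypothetical, not part of the original test, which is meant for human judgment rather than automation:

    ```python
    # An illustrative sketch of the TRAP test as a checklist data structure.
    # The exact phrasing and the all-or-nothing scoring are assumptions made
    # for this example, not a definitive encoding of the test.

    TRAP_QUESTIONS = {
        "T": "Think Critically: could this media be synthetic or manipulative?",
        "R": "Realistic/Reliable/Reputable: does it seem real, from a reputable source?",
        "A": "Accurate/Authority: are all details consistent and verified by an authority?",
        "P": "Purpose/Propaganda: what is this media trying to get me to do or believe?",
    }

    def trap_check(answers: dict[str, bool]) -> bool:
        """Return True only if every TRAP question was satisfactorily answered.

        `answers` maps each letter to True (the check passed) or False.
        A single failed check means the media should not be trusted as-is.
        """
        return all(answers.get(letter, False) for letter in TRAP_QUESTIONS)

    # Example: a video from an unknown source with inconsistent lighting
    # fails the R and A checks, so the overall verdict is "do not trust".
    verdict = trap_check({"T": True, "R": False, "A": False, "P": True})
    print(verdict)  # False
    ```

    The design point is that the checks combine conjunctively: one failed question is enough to withhold trust, mirroring the article’s advice to verify every aspect before accepting media as genuine.
    
    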

    Using the TRAP test and asking these questions will help prevent anyone from being scammed or deceived. Students, faculty, and everyone else must develop AI Literacy skills like these.

    All aspects of AI Literacy are important (Anders, 2023), but awareness and critical thinking are key to developing the proper mindset for using the TRAP test. These skills must be continually developed and practiced to ensure their greatest effectiveness.

    All of us in academia must work to ensure that students and everyone else develop these skills so they can use AI in the right way, properly spot AI deepfakes, and avoid being deceived. Please share this information with colleagues, students, family, and friends — especially the elderly, who can at times be even more vulnerable. Together we can make a major difference and improve our new world filled with AI.

    A video describing the TRAP test is also available on the Sovorel Educational YouTube channel:

    “How to Spot a Deepfake and NOT Be Deceived” (Anders, 2024)

    References

    Anders, B. (2023). The AI literacy imperative: Empowering instructors & students. Sovorel Publishing.

    Jackson, A. (2023, October 13). The rising tide of deepfakes as AI growth cause concern. Cyber Magazine, Technology: AI. https://cybermagazine.com/technology-and-ai/the-rising-tide-of-deepfakes-as-ai-growth-cause-concern
