Tag: Deepfakes

  • FIRE opposes Virginia’s proposed regulation of candidate deepfakes

    Last year, California passed restrictions on sharing AI-generated deepfakes of candidates, which a court then promptly blocked for violating the First Amendment. Virginia now looks to be going down a similar road with a new bill to penalize people for merely sharing certain AI-generated media of political candidates.

    This legislation, introduced as SB 775 and HB 2479, would make it illegal to share artificially generated, realistic-looking images, video, or audio of a candidate to “influence an election,” if the person knew or should have known that the content is “deceptive or misleading.” Violations carry a civil penalty or, if the sharing occurred within 90 days before an election, up to one year in jail. A person can avoid these penalties only by adding a conspicuous disclaimer to the media.

    The practical effects of this ban are alarming. Say a person in Virginia encounters a deepfaked viral video of a candidate on Facebook within 90 days of an election. They know it’s not a real image of the candidate, but they think it’s amusing and captures a message they want to share with other Virginians. It doesn’t have a disclaimer, but the person doesn’t know it’s supposed to, and doesn’t know how to edit the video anyway. They decide to repost it to their feed.

    That person could now face jail time.

    The ban would also impact the media. Say a journalist shares a deepfake that is directly relevant to an important news story. The candidate depicted decides that the journalist didn’t adequately acknowledge “in a manner that can easily be heard and understood by the average listener or viewer, that there are questions about the authenticity of the media,” as the bill requires. That candidate could sue to block further sharing of the news story.

    These scenarios illustrate the startling breadth of SB 775/HB 2479’s regulation of core political speech, which makes the bill unlikely to survive judicial scrutiny. Laws targeting core political speech have serious difficulty passing constitutional muster, even when they involve false or misleading speech, because there is no general First Amendment exception for misinformation, disinformation, or other false speech. And for good reason: a general exception would be easily abused to suppress dissent and criticism.

    There are narrow, well-defined categories of speech not protected by the First Amendment — such as fraud and defamation — that Virginia can and does already restrict. But SB 775/HB 2479 is not limited to fraudulent or defamatory speech.

    Laws that burden protected speech related to elections must clear a very high bar to pass constitutional muster. This bill doesn’t meet that bar. It restricts far more speech than necessary to prevent voters from being deceived in ways that could affect an election, and there are other ways to address deepfakes that would burden much less speech. For one, other speakers or candidates can (and do) simply point them out, eroding their potential to deceive.

    The First Amendment safeguards expressive tools like AI, allowing them to enhance our ability to communicate with one another without facing undue government restrictions.

    We urge the Virginia General Assembly to reject this legislation. If it reaches his desk, Virginia Gov. Glenn Youngkin should veto it.

  • New Title IX Rule Defines Deepfakes as Sexual Harassment

    On April 19, 2024, the U.S. Department of Education released updated Title IX regulations clarifying that schools may address incidents of harassment involving non-consensual, sexually explicit deepfakes through Title IX action. Title IX is a federal law that bars sex discrimination in education programs. It applies to all public and private elementary and secondary schools, school districts, colleges, and universities that receive federal funding (hereinafter “schools”) and governs schools’ responses to complaints of sexual harassment or assault.

    What are deepfakes? 

    “Deepfakes” are “multimedia that has either been synthetically created or manipulated using some form of machine or deep learning (artificial intelligence) technology.” Sexually explicit deepfake images can be generated using methods like face-swapping, which replaces one person’s face with another’s, or “undressing,” which alters a clothed image to make the subject appear nude. Deepfakes and the artificial intelligence technologies that generate them are increasingly sophisticated, harder to detect, and widely accessible to anyone with a computer or smartphone app at little to no cost.

    In the past two years, numerous incidents have occurred in schools in which students created deepfake media of other students or teachers, and school staff created deepfakes of fellow staff, for purposes ranging from impersonating teachers to convey offensive messages to sharing sexually explicit images and videos. Educational institutions have been grappling with how to respond to advances in AI, and these deepfake incidents have sparked additional concern about how to protect students, staff, and administrators, and how to address incidents when they occur. This blog discusses how the recently updated Title IX Rule applies to deepfake incidents and provides four tips for how schools can prepare to respond.

    How might Title IX apply to deepfake incidents in schools? 

    The new Title IX Rule updates the definition of “sexual harassment” to include “the nonconsensual distribution of intimate images,” including authentic images and those altered or generated by AI. Existing Title IX protections against harassment apply to actions connected to any school-related programs or activities, regardless of whether the harassment occurs on or off campus. That is, even if deepfakes are disseminated outside of school, Title IX requires schools to address off-campus behavior that creates a “hostile environment” in the school. Under the new rule, behavior qualifies as sexual harassment when it is objectively and subjectively offensive and so “severe or pervasive” that it limits or denies a person’s ability to “participate in or benefit from the recipient’s education program or activity.” The previous definition was narrower, requiring that behavior be “so severe, pervasive, and objectively offensive.” Determining whether behavior has created a hostile environment is fact-specific, and Title IX stipulates the following considerations:

    “(i) The degree to which the conduct affected the complainant’s ability to access the recipient’s education program or activity; (ii) The type, frequency, and duration of the conduct; (iii) The parties’ ages, roles within the recipient’s education program or activity, previous interactions, and other factors about each party that may be relevant to evaluating the effects of the conduct; (iv) The location of the conduct and the context in which the conduct occurred; and (v) Other sex-based harassment in the recipient’s education program or activity.”

    Additionally, the updated Title IX Rule modified investigation standards. Higher education institutions now have a lower bar for adjudicating complaints: a “preponderance of the evidence” standard rather than the previous “clear and convincing evidence” standard. Universities will still be able to use the higher standard if they use it in factually similar cases. Primary and secondary schools will continue to have the option of informal resolution of complaints if “available and appropriate.”

    Four Proactive Practices for Educational Institutions

        • Update policies to include deepfakes. Educational institutions should routinely review their policies and procedures and update them as needed to ensure their effectiveness in addressing image-based sexual harassment. These policies should convey how to handle instances of deepfakes created by and/or of students, teachers, or other staff in and outside of school and whether policies differ based on the method of distribution (e.g., sharing on an external site like Instagram versus posting on a school forum, in person, etc.). Sexually explicit deepfakes may be created or distributed using online tools outside of school or using products the school has procured. School districts should evaluate procured products that could be used to create or distribute deepfakes and review agreements with those third-party vendors for compatibility with the districts’ own policies on incident response. Lastly, policies should include defined terms that aren’t overbroad (like banning all “AI”) or underinclusive (like defining “deepfakes” as only still images). 
        • Ensure that Title IX procedures are properly implemented. Schools must recognize that Title IX legal obligations and student protections may apply to sexually explicit deepfake incidents. Title IX requires that schools conduct a “prompt, impartial, and thorough investigation” of sexual harassment complaints and take appropriate steps toward resolution. Title IX investigation procedures and policies must be updated in accordance with the Rule’s new “preponderance of the evidence” standard. Legal obligations can include keeping the identity of complainants confidential, informing complainants about available resources, interviewing complainants in an appropriate manner, and pursuing a formal hearing when requested by complainants. School leaders should incorporate the definition and handling of deepfake incidents into Title IX policies and ensure that procedures are in place for staff to respond promptly and effectively.
        • Instruction and training for school staff. Schools are required to communicate Title IX policies to all students and staff, which could include highlighting that non-consensual, sexually explicit deepfakes may qualify as Title IX sexual harassment. Institutions should consider training staff on responsible technology use, ethical uses of AI (in and out of school), how misuse impacts others, and what repercussions exist. Districts can share resources to help educators identify deepfake content (like those from the Department of Homeland Security, MIT, and AI for Education).
            • Education leaders should also ensure that staff are properly trained on requirements under the Family Educational Rights and Privacy Act (FERPA) and how it interacts with Title IX complaints. Title IX investigations typically involve maintaining information that directly relates to a student and is personally identifiable, thus creating a FERPA-covered education record and triggering additional privacy protections. Victims may report deepfake incidents to law enforcement of their own accord; however, it is important to inform staff about when schools can legally disclose information to law enforcement, such as with parental consent, under a court order or subpoena, or under a FERPA exception. Title IX requires that the identities of a sexual harassment complainant and the alleged perpetrator be kept confidential unless the disclosure is permitted by FERPA, required by law, or necessary to carry out Title IX purposes. See FPF’s guide for more on Law Enforcement Access to Student Records.
        • Instruction and training for students. Educational institutions must inform students and/or parents of their Title IX policies and should consider educating students and parents on the ethical and legal use of AI. This instruction could take many forms, but it should include (1) the appropriate uses of AI in and out of school, (2) the inappropriate uses of AI that would lead to disciplinary action, (3) the process of disciplinary action, and (4) the negative impact that unethical or illegal use of AI could have on the victim, the creator, and the community. Communicating to students the seriousness of misusing AI could help prevent further incidents.

    The updated Title IX Rule clarified that schools should evaluate whether a sexually explicit deepfake incident qualifies as sexual harassment. School leaders should also understand that, in addition to Title IX, FERPA, state-specific laws, and privacy policies governing the sharing of student information may apply to incidents, even when that information is AI-generated. States are increasingly enacting non-education-specific laws to combat the generation or dissemination of sexually explicit deepfakes. For example, Washington State enacted House Bill 1999 this year, which expanded the criminal offenses for non-consensual creation or sharing of sexually explicit, fabricated images of an identifiable minor, similar to laws in Virginia and New York. Educational institutions should stay informed about applicable statutes and be aware that the legal landscape is quickly evolving to combat deepfake incidents.

    What’s Next? 

    The updated Title IX Rule became effective on August 1, 2024, and applies to any complaints of alleged conduct that occurs on or after that date. As of the effective date, 26 states had filed suits challenging the rule and been granted injunctions blocking its enforcement. Pushback from the states and other organizations largely stems from the updated rule’s expansion of the definition of sex discrimination to include “gender identity,” and it is not yet clear how these legal challenges will affect the future of the updated rule.

  • The TRAP Test to Spot AI Deepfakes and How to NOT Be Deceived – Sovorel

    Everyone needs to develop AI Literacy skills in order to use AI properly and work more effectively and efficiently. Another vital part of AI Literacy is developing the critical thinking and awareness skills needed to avoid being deceived by synthetic media such as AI-created deepfakes. Cyber Magazine, an international news source, expressed the importance of this issue by stating:

    Deepfakes are inevitably becoming more advanced, which is making it harder to spot and stop those that are used with bad intentions. As access to synthetic media technology increases, deepfakes can be used to damage reputations, fabricate evidence and undermine trust.

    With deepfake technology increasingly being used for mal-intent, businesses would do well to ensure that their workforce is fully trained and aware of the risks associated with AI-generated content. (Jackson, 2023)

    To address this important issue, I have created the TRAP test:

    T: Think Critically. All of us must now bring a critical awareness and mindset to any type of digital media, since all digital media can now be easily manipulated or created with generative AI. When encountering any digital text, images, audio, or video, we need to realize that it might not be real and that it might be trying to manipulate our perception. We need to use the TRAP test to ask further questions that help ensure we are getting the objective truth.

    R: Realistic/Reliable/Reputable. When using digital media or viewing a video, we need to ask ourselves, “Does this seem real, and is it likely to occur?” We must also consider whether the information comes from a reliable, reputable source. Is it from an official source, a well-known news outlet, a government agency, or an established organization? Always check the source.

    A: Accurate/Authority. Check whether all parts of the digital media are accurate. For example, if watching a video, are all parts accurate and consistent? Are there any issues with the eyes, the background, or the light sources? Is it similar to and consistent with other videos, images, or text? Additionally, has the media been released or authenticated by an authority? Answering these questions helps ensure validity and accuracy.

    P: Purpose/Propaganda. When reviewing any digital media, we must ask ourselves, “What is the purpose of this media?” If the answer is that someone is trying to get your money or sway your vote in an election, then you should be extra sure that the information is completely truthful. Ask yourself whether the information presented is simply propaganda, full of bias and misleading claims. Be sure to ask if there is more to the story you are reading or watching.

    Using the TRAP test and asking these questions will help prevent everyone from being scammed and/or deceived. Students, faculty, and everyone else must develop AI Literacy skills like these.

    All aspects of AI Literacy are important (Anders, 2023), but Awareness and Critical Thinking are key to developing the proper mindset for using the TRAP test. These skills must be continually developed and practiced to ensure their greatest effectiveness.

    All of us in academia must work to ensure that students and everyone else develop these skills so they can use AI in the right way, properly spot AI deepfakes, and avoid being deceived. Please share this information with colleagues, students, family, and friends, especially the elderly, who can at times be even more vulnerable. Together we can make a major difference and improve our new world filled with AI.

    A video describing the TRAP test is also available on the Sovorel Educational YouTube channel:

    “How to Spot a Deepfake and NOT Be Deceived” (Anders, 2024)

    References

    Anders, B. (2023). The AI literacy imperative: Empowering instructors & students. Sovorel Publishing.

    Jackson, A. (2023, October 13). The rising tide of deepfakes as AI growth cause concern. Cyber Magazine, Technology: AI. https://cybermagazine.com/technology-and-ai/the-rising-tide-of-deepfakes-as-ai-growth-cause-concern
