Tag: Disinformation

  • How to combat misinformation and disinformation in the classroom


    Dive Brief:

    • As students spend more time on digital media, and as misinformation — and disinformation — continues to infiltrate those platforms, how should educators react? Experts say they should take a meta approach that educates students in digital literacy and gently but persistently questions conspiracies rather than arguing. 
    • Media literacy needs to be infused throughout the curriculum because students are consuming media related to every subject, said Eisha Buch, head of teaching and learning at Common Sense Media. 
    • While facts and information debunk those theories, students who believe them shouldn’t be put on the defensive. “Give students the tools to reflect as much as they can, with open-ended, non-judgmental questions,” said Noah Rauch, senior vice president of education and public programs for the 9/11 Memorial & Museum. “Get students to think about their own thinking.”

    Dive Insight:

    Media literacy is as critical as teaching students to read, write and do math, said Buch. The urgency for the subject comes with the rise of artificial intelligence and the ongoing circulation of conspiracy theories about everything from 9/11 to the COVID-19 pandemic. 

    “Debunking conspiracy theories is part of media literacy,” but it’s not the place to start, said Buch. “It’s about understanding the media ecosystem that allows such content to flourish.”

    Many museums offer resources for educators, including the 9/11 Memorial & Museum, which has a variety of programs and resources for schools and teachers to talk about the tragedy. These include physical and virtual field trips, as well as professional development opportunities for educators at the museum, online and at conferences around the country, Rauch said.

    Some of this programming focuses both on debunking particular conspiracy theories and on critical thinking, often starting with the Occam’s razor principle. “The simplest explanation is often the correct one,” Rauch said, adding, “If you belittle an idea, it’s going to force people into a corner.”

    Such techniques are certainly not limited to combating misinformation and disinformation about 9/11, Rauch said. 

    Educators should ask students, “What’s the goal of the conspiracy? How many people would need to be involved?” he said. “Is there any piece of evidence we can give you that might change your mind?” 

    For educators to succeed in these meta-strategies, they first need to understand the attention economy — in which companies compete for individuals’ attention as a commodity — and how it works, and then instill that understanding in students, Buch said.

    This would include asking who profits from the content, why the content exists, and who it serves. Then, educators should ask students to decide whether to engage with a given post or news story or to ignore it, Buch said. If a student decides to engage, they should think critically and stay curious, Buch said.

    Especially in an AI-fueled world, students should not “go super-deep on one particular post — go wide, on different sources,” she said. “This is, admittedly, the hardest piece, especially with AI: You can find multiple sources that still give you the wrong information.”

    Students should also be urged to check their emotional responses, Buch said. 

    “If something makes you furious or excited, it’s worthwhile to pause and think,” she said. “Strong emotions work against your ability to think critically. Disinformation is trying to bypass the rational frame.” 

    Educators should continuously practice these measures in class, even with reliable sources of information, Buch said.

    “Everything you see online is tied to an incentive model in some way,” Buch added. “You’re not trying to teach kids to be cynical, but you do want them to think critically.”


  • Disinformation and the decline of democracy

    The unprecedented mob assault on the U.S. Capitol on January 6 represents perhaps the most stunning collision yet between the world of online disinformation and reality.

    The supporters of U.S. President Donald Trump who broke into Congress did so in the belief that the U.S. election was stolen from them after weeks of consuming unproven narratives about “ballot dumps,” manipulated voting machines and Democrat big-city corruption. Some — including the woman who was shot dead — were driven by the discredited QAnon conspiracy theory that represents Democratic Party elites as a pedophile ring and Trump as the savior.

    It’s tempting to hope that disinformation and its corrosive effects on democracy may have reached a high-water mark with the events of January 6 and the end of Trump’s presidency. But trends in technology and society’s increasing separation into social media echo chambers suggest that worse may be to come.

    Imagine for a moment if video of the Capitol riot had been manipulated to replace the faces of Trump supporters with those of known protestors for antifa, a left-wing, anti-fascist and anti-racist political movement. This would have bolstered the unproven story that has emerged about a “false flag” operation. Or imagine if thousands of different stories written by artificial intelligence software and peddling that version of events had flooded social media and been picked up by news organizations in the hours after the assault.

    That technology not only exists; it’s getting more sophisticated and easier to access by the day.

    Trust in democracy is eroding.

    Deepfake, or synthetic, videos are starting to seep from pornography — where they’ve mostly been concentrated — into the world of politics. A deepfake of former President Barack Obama using an expletive to describe Trump has garnered over eight million views on YouTube since it was released in 2018.

    Most anyone familiar with Obama’s appearance and speaking style can tell there’s something amiss with that video. But two years is an eternity in AI-driven technology and many experts believe it will soon be impossible for the human eye and ear to spot the best deepfakes.

    A deepfake specialist was hailed early last year for using freely available software to “de-age” Robert De Niro and Joe Pesci in the movie “The Irishman,” producing a result that many critics considered superior to the work of the visual-effects supervisor on the actual film.

    In recent years, the sense of shared, objective reality and trust in institutions have already come under strain as social media bubbles hasten the spread of fake news and conspiracy theories. The worry is that deepfakes and other AI-generated content will supercharge this trend in coming years.

    “This is disastrous to any liberal democratic model because in a world where anything can be faked, everyone becomes a target,” Nina Schick, the author of “Deepfakes — The Coming Infopocalypse,” told U.S. author Sam Harris in a recent podcast.

    “But even more than that, if anything can be faked … everything can also be denied. So the very basis of what is reality starts to become corroded.”

    Governments must do more to combat disinformation.

    Illustrating her point is reaction to Trump’s video statement released a day after the storming of Congress. While some of his followers online saw it as a betrayal, others reassured themselves by saying it was a deepfake.

    On the text side, the advent of GPT-3 — an AI program that can produce articles indistinguishable from those written by humans — has potentially powerful implications for disinformation. Writing bots could be programmed to produce fake articles or spew political and racial hatred at a volume that could overwhelm fact-based, moderated content.

    Society has been grappling with written fake news for years and photographs have long been easily manipulated through software. But convincingly faked videos and AI-generated stories seem to many to represent a deeper, more viral threat to reality-based discourse.

    It’s clear that there’s no silver-bullet solution to the disinformation problem. Social media platforms like Facebook have a major role to play and are developing their own AI technology to better detect fake content. While fakers are likely to keep evolving to stay ahead, stricter policing and quicker action by online platforms can at least limit the impact of false videos and stories.

    Governments are coming under pressure to push Big Tech into taking a harder line against fake news, including through regulation. Authorities can devote more funding to digital media literacy programs in schools and elsewhere to help individuals become more alert and proficient in identifying suspect content.

    When it comes down to it, the real power of fake news hinges on those who believe it and spread it.


    Questions to consider:

    1. How can technology be used to spread fake news?

    2. Why is disinformation potentially harmful to democracy?

    3. How do you think the rise of AI technology will affect the type of information people consume?
