Category: fact-checking

  • Can you believe it? | News Decoder


    Can you tell the difference between a rumor and fact?

Let’s start with gossip. That’s when you talk or chat with people about other people. We do this all the time, right? Something becomes a rumor when you or someone else learns something specific through all the chit-chat and then passes it on, in conversation or on social media.

A rumor can be about anyone and anything. The nastier or naughtier the tidbit, the greater the chance people will pass it on. When enough people spread it, it goes viral. That’s when it seems to take on a life of its own.

    A fact is something that can be proven or disproven. The thing is, both fact and rumor can be accepted as a sort of truth. In the classic song “The Boxer,” the American musician Paul Simon once sang, “a man hears what he wants to hear and disregards the rest.”

    Once a piece of information has gone viral, whether fact or fiction, it is difficult to convince people who have accepted it that it isn’t true.

    Fact and fiction

That’s why it is important — if you care about truth, that is — to determine whether a rumor is based on fact before you pass it on. That’s what ethical journalists do. Reporting is about finding evidence that can show whether something is true. Without evidence, journalists shouldn’t report something; if they do, they must make sure their readers or listeners understand that the information is based on speculation or unproven rumor.

    There are two types of evidence they will look for: direct evidence and indirect evidence. The first is information you get first-hand — you experience or observe something yourself. All else is indirect. Rumor is third-hand: someone heard something from someone who heard it from the person who experienced it.

    Most times you don’t know how many “hands” information has been through before it comes to you. Understand that in general, stories change every time they pass from one person to another.

    If you don’t want to become a source of misinformation, then before you tell a story or pass on some piece of information, ask yourself these questions:

    → How do I know it?

    → Where did I get that information and do I know where that person or source got it?

    → Can I trace the information back to the original source?

    → What don’t I know about this?

    Original and secondary sources

An original source might be yourself, if you were there when something happened. It might be a story told to you by someone who was there when something happened — an eyewitness. It might be a report or study authored by the person or group of people who gathered the data themselves.

Keep in mind, though, that people see and experience things differently, and two people who are eyewitnesses to the same event might have remarkably different memories of it. How they tell a story often depends on their perspective, and that often depends on how they relate to the people involved.

If you grew up with dogs, then when you see a big dog barking you might interpret that as the dog wanting to play. But if you have been bitten by a dog, then a big dog barking seems threatening. Same dog, same circumstance, but contrasting perspectives based on previous experience.

    Pretty much everything else is second-hand: A report that gets its information from data collected elsewhere or from a study done by other researchers; a story told to you by someone who spoke to the person who experienced it.

    But how do videos come into play? You see a video taken by someone else. That’s second-hand. But don’t you see what the person who took the video sees? Isn’t that almost the same as being an eyewitness?

Not really. Consider this. Someone tells you about an event. You say: “How do you know that happened?” They say: “I was there. I saw it.” That’s pretty convincing. Now, if they say: “I saw the video,” that isn’t as convincing. Why? Because you know that the video might not have shown all of what happened. It might have left out something significant. It might even have been edited or doctored in some way.

    Is there evidence?

    Alone, any one source of information might not be convincing, even eyewitness testimony. That’s why when ethical reporters are making accusations in a story or on a podcast, they provide multiple, different types of evidence — a story from an eyewitness, bolstered by an email sent to the person, along with a video, and data from a report.

    It’s kind of like those scenes in murder mysteries where someone has to provide a solid alibi. They can say they were with their spouse, but do you believe the spouse?

    If they were caught on CCTV, that’s pretty convincing. Oh, there’s that parking ticket they got when they were at the movies. And in their coat pocket is the receipt for the popcorn and soda they bought with a date and time on it.

    Now, you don’t have to provide all that evidence every time you pass on a story you heard or read. If that were a requirement, conversations would turn really dull. We are all storytellers and we are geared to entertain. That means that when we tell a story we want to make it a good one. We exaggerate a little. We emphasize some parts and not others.

    The goal here isn’t to take that fun away. But we do have a worldwide problem of misinformation and disinformation.

Do you want to be part of that problem or part of a solution? If the latter, all you have to do is this: Recognize what you actually know, separate it in your head from what you heard or saw second-hand (from a video or photo or documentary) and let people know where you got the information so they can judge it for themselves.

    Don’t pass on information as true when it might not be true or if it is only partially true. Don’t pretend to be more authoritative than you are.

    And perhaps most important: What you don’t know might be as important as what you do know.


    Questions to consider:

    1. What is an example of an original source?

    2. Why should you not totally trust information from a video?

3. Can you think of a time when your memory of an event differed from that of someone else who was there?


    Source link

  • Can the information you share be trusted?


    “If you see that a person has lied in the past you should carefully consider whether it is a good idea to trust them,” Jonas said. 

    3. Find other sources that seem to be reporting the same thing.

    “Sometimes you will find that different sources interpret the same event very differently,” Jonas said. “Think about which sources you should trust more.”

Information from research articles, journalistic publications or academic experts and institutions is generally more reliable than blog posts or social media posts, Jonas said.

Be a bit skeptical, too, she said, when a publication or podcast or post seems to mix information with emotion, and see if you can separate factual reporting from opinion.

    Incorporating this healthy skepticism and adopting a system for verifying information will help you build a reputation for credibility and reliability. This is useful not just in your reporting, Jonas said, but in your daily life, as well. 



    Questions to consider:

    1. What is meant by a system of verification?

    2. Why should you check for information about the author of an article or post you read?

    3. How can a healthy skepticism be useful in your daily life?



    Source link

  • Is freedom of speech the same as freedom to lie?


    Meta will stop checking falsehoods. Does that mean more free speech or a free-for-all?

    “First, we’re going to get rid of fact-checkers,” Mark Zuckerberg, the founder of Meta, said in a video statement early this January. “Second, we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.”

This statement marks another turn in the company’s policies for handling disinformation and hate speech on its widely used platforms Facebook, Instagram and Threads.

Meta built up its moderation capabilities and started its fact-checking program after Russia’s attempts to use Facebook to influence American voters in 2016, and after human rights groups like Amnesty International partially blamed the company for allowing the spread of hate speech that fueled genocide in Myanmar.

Until now, according to Meta, about 15,000 people have reviewed content on its platforms in 70 languages to see if it is in line with the company’s community standards.

    Adding information, not deleting

    For other content, the company involves professional fact-checking organizations with journalists around the world. They independently identify and research viral posts that might contain false information. 

Fact-checkers, like any other journalists, publish their findings in articles. They compare what is claimed in the post with statistics, research findings and expert commentary, or they analyze whether the media in the post are manipulated or AI-generated.

    But fact-checkers have a privilege that other journalists don’t – they can add information to the posts they find false or out of context on Meta platforms. It appears in the form of a warning label. The user can then read the full article by fact-checkers to see the reasons or close the warning and interact with the post.

    Fact-checkers can’t take any further action like removing or demoting content or accounts, according to Meta. That is up to the company. 

However, Meta now likens the fact-checking program to censorship. Zuckerberg also argued for ending the program, saying that fact-checkers “have just been too politically biased and have destroyed more trust than they’ve created.”

    Can untrained people regulate the Web?

    For now, the fact-checking program will be discontinued in the United States. Meta plans to rely instead on regular users to evaluate content under a new program it calls “Community Notes.” The company promises to improve it over the course of the year before expanding it to other countries.

In a way, Meta’s walking back its commitments to fight disinformation wasn’t a surprise, said Carlos Hernández-Echevarría, associate director of the Spanish fact-checking outlet Maldita and a deputy member of the European Fact-Checking Standards Network, the governance body that assesses and approves European fact-checking organizations before they can work with Meta.

Zuckerberg had previously said that the company was unfairly blamed for societal ills and that he was done apologizing. But fact-checking partners weren’t warned ahead of the announcement of the plans to scrap the program, Hernández-Echevarría said.

    It bothers him that Meta connects fact-checking to censorship.

    “It’s actually very frustrating to see the Meta CEO talking about censorship when fact-checkers never had the ability and never wanted the ability to remove any content,” Hernández-Echevarría said. He argues that instead, fact-checkers contribute to speech by adding more information. 

    Are fact-checkers biased?

Hernández-Echevarría also pushes back against the accusation that fact-checkers are biased. He said that mistakes do occur, but the organizations and people doing the work are carefully vetted, and the criteria can be seen in the network’s Code of Standards.

    For example, fact-checkers must publish their methodology for choosing and evaluating information. Fact-checkers also can’t endorse any political parties or have any agreements with them. They also have to provide proof of who they are owned by as well as publicly disclose information about their employees and funding.

Meta’s own data about Facebook, which it discloses to EU institutions, also shows that erroneous decisions to demote posts based on fact-checking labels occur much less often than erroneous demotions for other reasons — nudity, bullying, hate speech and violence, for example.

In the period from April to September last year, Meta received 172,550 complaints about the demotion of posts with fact-checking labels and, after having another look, reversed the decision for 5,440 posts — a little over 3%.

    However, in all other categories combined, the demotion had to be reversed for 87% of those posts.
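For the fact-checking category, the reversal rate is easy to verify from the figures in the paragraph above (the rounding here is mine; Meta's disclosure does not give the raw counts for the other categories):

```python
# Meta's EU disclosure, April-September: complaints about demotions
# of posts carrying fact-checking labels, and how many were reversed.
complaints = 172_550
reversals = 5_440

rate = reversals / complaints * 100
print(f"Reversal rate: {rate:.1f}%")  # → Reversal rate: 3.2%
```

That works out to just over 3%, matching the article's figure and underscoring the contrast with the 87% reversal rate across the other categories.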

    The sharing of unverified information

    Research shows that the perception of the unequal treatment of different political groups might form because people on the political right publish more unreliable information.

A paper published in the journal Nature says that conservative users indeed face penalties more often, but they also share more low-quality news. The researchers therefore argued that even if the policies contain no bias, there can be an asymmetry in how they are enforced on platforms.

    Meta is also making other changes. On 7 January, the company published a revised version of its hateful conduct policies. The platform now allows comparing women to household objects and “insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality”. The revised policies also now permit “allegations of mental illness or abnormality when based on gender or sexual orientation”.

    LGBTQ+ advocacy group GLAAD called these changes alarming and extreme and said they will result in platforms becoming “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation”. 

Journalists also report that the changes divided the company’s employees. The New York Times reported that when some upset employees posted on an internal message board, human resources workers quickly removed the posts, saying they broke a company policy on community engagement.

    Political pressure

In a statement published on her social media channels, Angie Drobnic Holan, the director of the International Fact-Checking Network, which represents fact-checkers in the United States, linked Meta’s decision to political pressure.

    “It’s unfortunate that this decision comes in the wake of extreme political pressure from a new administration and its supporters,” Holan said. “Fact-checkers have not been biased in their work. That attack line comes from those who feel they should be able to exaggerate and lie without rebuttal or contradiction.”

In his book “Save America,” published in August 2024, Donald Trump, whose term as U.S. president begins today, accused Zuckerberg of plotting against him. “We are watching him closely, and if he does anything illegal this time he will spend the rest of his life in prison,” he wrote.

Now, with the changes Zuckerberg announced, Trump is praising Meta and said the company has come a long way. Asked during a press conference on 7 January whether he thought Zuckerberg was responding to his threats, Trump replied, “Probably.”

After Meta’s announcement, the journal Nature published a review of research, with comments from experts, on the effectiveness of fact-checking. For example, a 2019 study analyzing 30 research papers covering 20,000 participants found an influence on beliefs, but the effects were weakened by participants’ preexisting beliefs, ideology and knowledge.

Sander van der Linden, a social psychologist at the University of Cambridge, told Nature that ideally people wouldn’t form misperceptions in the first place, but “if we have to work with the fact that people are already exposed, then reducing it is almost as good as it’s going to get”.

    Hernández-Echevarría said that although the loss of Meta’s funding will be a hard hit to some organizations in the fact-checking community, it won’t end the movement. He said, “They are going to be here, fighting disinformation. No matter what, they will find a way to do it. They will find support. They will do it because their central mission is to fight disinformation.”


    Questions to consider:

    • What is now allowed under Meta’s new rules for posts that wasn’t previously?

    • How is fact-checking not the same as censorship?

    • When you read social media posts, do you care if the poster is telling the truth?



    Source link