Category: Technology

  • Is freedom of speech the same as freedom to lie?

    Meta will stop checking falsehoods. Does that mean more free speech or a free-for-all?

    “First, we’re going to get rid of fact-checkers,” Mark Zuckerberg, the founder of Meta, said in a video statement early this January. “Second, we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.”

    This statement marks another turn in the company’s policies for handling disinformation and hate speech on its widely used platforms Facebook, Instagram and Threads.

    Meta built up its moderation capabilities and started its fact-checking program after Russia’s attempts to use Facebook to influence American voters in 2016 and after it was partially blamed by various human rights groups like Amnesty International for allowing the spread of hate speech leading to genocide in Myanmar. 

    Until now, according to Meta, about 15,000 people review content on the platforms in 70 languages to see whether it is in line with the company’s community standards.

    Adding information, not deleting

    For potentially false content, the company involves professional fact-checking organizations with journalists around the world. They independently identify and research viral posts that might contain false information.

    Fact-checkers, like any other journalists, publish their findings in articles. They compare what is claimed in the post with statistics, research findings and expert commentary or they analyze if the media in the post are manipulated or AI generated. 

    But fact-checkers have a privilege that other journalists don’t – they can add information to the posts they find false or out of context on Meta platforms. It appears in the form of a warning label. The user can then read the full article by fact-checkers to see the reasons or close the warning and interact with the post.

    Fact-checkers can’t take any further action like removing or demoting content or accounts, according to Meta. That is up to the company. 

    However, Meta now likens the fact-checking program to censorship. Zuckerberg also argued for the end of the program saying that the fact-checkers “have just been too politically biased and have destroyed more trust than they’ve created.”

    Can untrained people regulate the Web?

    For now, the fact-checking program will be discontinued in the United States. Meta plans to rely instead on regular users to evaluate content under a new program it calls “Community Notes.” The company promises to improve it over the course of the year before expanding it to other countries.

    In a way, Meta walking back its commitments to fight disinformation wasn’t a surprise, said Carlos Hernández-Echevarría, the associate director of the Spanish fact-checking outlet Maldita and a deputy member of the European Fact-Checking Standards Network, the governance body that assesses and approves European fact-checking organizations before they can work with Meta.

    Zuckerberg had previously said that the company was unfairly blamed for societal ills and that he was done apologizing. But fact-checking partners weren’t warned ahead of the announcement of the plans to scrap the program, Hernández-Echevarría said.

    It bothers him that Meta connects fact-checking to censorship.

    “It’s actually very frustrating to see the Meta CEO talking about censorship when fact-checkers never had the ability and never wanted the ability to remove any content,” Hernández-Echevarría said. He argues that instead, fact-checkers contribute to speech by adding more information. 

    Are fact-checkers biased?

    Hernández-Echevarría also pushes back against the accusation that fact-checkers are biased. He said that mistakes do occur, but the organizations and people doing the work are carefully vetted, and the criteria can be seen in the network’s Code of Standards.

    For example, fact-checkers must publish their methodology for choosing and evaluating information. Fact-checkers also can’t endorse any political parties or have any agreements with them. They also have to provide proof of who they are owned by as well as publicly disclose information about their employees and funding.

    Meta’s own data about Facebook, which they disclose to EU institutions, also shows that erroneous decisions to demote posts based on fact-checking labels occur much less often than when posts are demoted for other reasons — nudity, bullying, hate speech and violence, for example. 

    In the period from April to September last year, Meta received 172,550 complaints about the demotion of posts with fact-checking labels and, after another look, reversed the decision for 5,440 posts — a little over 3%.

    However, in all other categories combined, the demotion had to be reversed for 87% of those posts.
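    The contrast between the two figures can be checked with a quick back-of-the-envelope calculation (a minimal sketch in Python, using only the numbers cited in this article):

    ```python
    # Appeal figures Meta disclosed to EU institutions (April-September).
    complaints = 172_550    # complaints about fact-checking demotions
    reversals = 5_440       # demotions reversed after a second look

    rate = reversals / complaints
    print(f"Fact-checking reversal rate: {rate:.1%}")  # prints 3.2%
    ```

    In other words, roughly 97% of appealed fact-checking demotions were upheld, compared with only 13% in the other categories combined.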

    The sharing of unverified information

    Research shows that the perception of the unequal treatment of different political groups might form because people on the political right publish more unreliable information.

    A paper published in the journal Nature found that conservative users indeed face penalties more often, but that they also share more low-quality news. The researchers therefore argued that even if the policies contain no bias, there can be an asymmetry in how they are enforced on platforms.

    Meta is also making other changes. On 7 January, the company published a revised version of its hateful conduct policies. The platform now allows comparing women to household objects and “insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality”. The revised policies also now permit “allegations of mental illness or abnormality when based on gender or sexual orientation”.

    LGBTQ+ advocacy group GLAAD called these changes alarming and extreme and said they will result in platforms becoming “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation”. 

    Journalists also report that the changes divided the company’s employees. The New York Times reported that when some upset employees posted on an internal message board, human resources workers quickly removed the posts, saying they broke a company policy on community engagement.

    Political pressure

    In a statement published on her social media channels, Angie Drobnic Holan, the director of the International Fact-Checking Network, which represents fact-checkers in the United States, linked Meta’s decision to political pressure.

    “It’s unfortunate that this decision comes in the wake of extreme political pressure from a new administration and its supporters,” Holan said. “Fact-checkers have not been biased in their work. That attack line comes from those who feel they should be able to exaggerate and lie without rebuttal or contradiction.”

    In his book “Save America,” published in August 2024, Donald Trump, whose term as U.S. President begins today, accused Zuckerberg of plotting against him. “We are watching him closely, and if he does anything illegal this time he will spend the rest of his life in prison,” he wrote.

    Now, with the changes Zuckerberg announced, Trump is praising Meta and has said the company has come a long way. When asked during a press conference on 7 January whether he thought Zuckerberg was responding to his threats, Trump replied, “Probably.”

    After Meta’s announcement, the journal Nature published a review of research with comments from experts on the effectiveness of fact-checking. For example, a 2019 study analyzing 30 research papers covering 20,000 participants found that fact-checking does influence beliefs, but the effects were weakened by participants’ preexisting beliefs, ideology and knowledge.

    Sander van der Linden, a social psychologist at the University of Cambridge, told Nature that ideally, people wouldn’t form misperceptions in the first place but “if we have to work with the fact that people are already exposed, then reducing it is almost as good as it’s going to get”.

    Hernández-Echevarría said that although the loss of Meta’s funding will be a hard hit to some organizations in the fact-checking community, it won’t end the movement. He said, “They are going to be here, fighting disinformation. No matter what, they will find a way to do it. They will find support. They will do it because their central mission is to fight disinformation.”


    Questions to consider:

    • Under Meta’s new rules, what is now allowed in posts that wasn’t previously?

    • How is fact-checking not the same as censorship?

    • When you read social media posts, do you care if the poster is telling the truth?



  • A higher education institution’s relationship with technology crosses all its missions

    Universities have a critical role to play at the intersection of academic thought, organisational practice, and social benefits of technology.

    It’s easy, when thinking about universities’ digital strategies, to see them as a technical question of organisational capability and solutions rather than as one part of the wider public role universities have in leading thinking and shaping practice for the benefit of society.

    But for universities the relationship with technology is multifaceted: some parts of the institution are engaged in driving forward technological developments; others may be critically assessing how those developments reshape the human experience and throw up ethical challenges that must be addressed; while others may be seeking to deploy technologies in the service of improving teaching and research. The question, then, for universities, must be how to bring these relationships together in a critical but productive way.

    Thinking into practice

    The University of Edinburgh hosts one of the country’s foremost informatics and computer science departments, one of the largest centres of AI research in Europe. Edinburgh’s computing infrastructure lately made headlines when the Westminster government decided to cancel planned investment in a new supercomputing facility at the university, only to announce new plans for supercomputing investment in last week’s AI opportunities action plan, with the location as yet undetermined.

    But while the university’s technological research prowess is evident, there’s also a strong academic tradition of critical thought around technology – such as in the work of philosopher Shannon Vallor, director of the Centre for Technomoral Futures at the Edinburgh Futures Institute and author of The AI Mirror. In the HE-specific research field, Janja Komljenovic has explored the phenomenon of the “datafication” of higher education, raising questions of a mismatch and incoherence between how data is valued and used in different parts of an institution.

    When I speak to Edinburgh’s principal Peter Mathieson ahead of his keynote at the upcoming Kortext Live leaders event in Edinburgh on 4 February, he’s reflecting on a key challenge: how to continue a legacy of thought leadership on digital technology and data science into the future, especially when the pace of technological change is so rapid?

    “It’s imperative for universities to be places that shape the debate, but also that study the advantages and disadvantages of different technologies and how they are adopted. We need to help the public make the best use of technology,” says Peter.

    There’s work going on to mobilise knowledge across disciplines, for example, data scientists interrogating Scotland’s unique identifier data to gain insights on public health – which was particularly important during Covid. The university is a lead partner in the delivery of the Edinburgh and south east Scotland city region deal, a key strand of which is focused on data-driven innovation. “The city region deal builds on our heritage of excellence in AI and computer science and brings that to addressing the exam question of how to create growth in our region, attract inward investment, and create jobs,” explains Peter.

    Peter is also of the opinion that more could be done to bring university expertise to bear across the education system. Currently the university is working with a secondary school to develop a data science programme that will see secondary pupils graduate with a data science qualification. Another initiative sees primary school classrooms equipped with sensors that detect earth movements in different parts of the world – Peter recounts having been proudly shown a squiggle on a piece of paper by two primary school pupils, which turned out to denote an earthquake in Tonga.

    “Data education in schools is a really important function for universities,” he says. “It’s not a recruiting exercise – I see it as a way of the region and community benefiting from having a research-intensive university in their midst.”

    Connecting the bits

    The elephant in the room is, of course, the link between academic knowledge and organisational practice, and where and how those come together in a university as large and decentralised as Edinburgh.

    “There is a distinction between the academic mission and the day to day nuts and bolts,” Peter admits. “There is some irony that we are one of the finest computer science institutions but we had trouble installing our new finance system. But the capability we have in a place like this should allow us to feel positive about the opportunities to do interesting things with technology.”

    Peter points to the university-wide enablement of the Internet of Things, which allows the university to monitor building usage and helps to identify where buildings may be under-utilised. As principal, Peter also brought together estates and digital infrastructure business planning so that the physical and digital estate can be developed in tandem and with reference to each other rather than remaining in silos.

    “Being able to make decisions based on data is very empowering,” he says. “But it’s important that we think very carefully about what data is anonymised and reassure people we are not trying to operate a surveillance system.” Peter is also interested in how AI could help to streamline large administrative tasks, and the experimental deployment of generative AI across university activity. The university has developed its own AI innovation platform, ELM, the Edinburgh (access to) Language Models, which is free to use for all staff and students, and which gives the user access to large language models including the latest version of ChatGPT but, importantly, without sharing user data with OpenAI.

    At the leadership level, Peter has endeavoured to put professional service leaders on the same footing as academic leaders rather than, as he says, “defining professional services by what they are not, ie non-academic.” It’s one example of the ways that roles and structures in universities are evolving, not necessarily as a direct response to technological change, but with technology being one of the aspects of social change that create a need inside universities for the ability to look at challenges from a range of professional perspectives.

    It’s rarely as straightforward as “automation leading to staffing reductions” though Peter is alive to the perceived risks and their implications. “People worry about automation leading to loss of jobs, but I think jobs will evolve in universities as they will elsewhere in society,” he says. “Much of the value of the university experience is defined by the human interactions that take place, especially in an international university, and we can’t replace physical presence on campus. I’m optimistic that humans can get more good than harm out of AI – we just need to be mindful that we will need to adapt more quickly to this innovation than to earlier technological advances like the printing press, or the Internet.”

    This article is published in association with Kortext. Peter Mathieson will be giving a keynote address at the upcoming Kortext LIVE leaders’ event in Edinburgh on 4 February – join us there or at the London or Manchester events on 29 January and 6 February to find out more about Wonkhe and Kortext’s work on leading digital capability for learning, teaching and student success, and be part of the conversation.


  • Tech Integration Teacher, What time is it?

    When someone asks what time it is, that person wants to know the time, not the history of the clock, not how a clock works, and not what other types of clocks there are. Classroom teachers want to help their students improve their academic learning through technology. Sometimes they need help with technology, so they go to a technology integration teacher. Any technology integration teacher should offer direct instruction in exactly what the teacher wants. A friend of mine just came back from a one-on-one training session on a management system. He felt lost: the technology integration person showed all the possibilities, and my friend became very confused about what to actually do. My friend felt like the technology integration teacher was boasting about all she knew about the program. My friend did not come away knowing how to use the program. He did not have what he needed for his class. He did not know the time.

    How do you, technology integration teachers, instruct teachers in exactly what they need, in a simple manner, so that they can spend their time helping students learn their subject area instead of spending hours trying to figure out how to use a technology?


  • Curriculum Focus, Not Technology Focus

    In my public school career I have been a classroom teacher, a technology integration specialist and a technology administrator. In my technology role, I served under the Assistant Superintendent for Instruction. She had a simple mission: Improve students’ academic learning. My mission was equally simple: Improve students’ academic learning through technology. I met with the curriculum chairs to learn about the curriculum, how it was taught, and areas in which teachers and students had the most difficulty. When I met with grade level or curriculum teacher teams, we talked about the curriculum. After carefully listening to them, I usually would suggest some technology tool that might help them in doing their favorite project or in teaching those difficult curriculum areas. I often would have a mock student product to show the teachers what the student learning with technology would look like. I focused on student learning, not on technology.

    Likewise, when my Technology department provided professional development, we focused on curriculum such as “Inquiry Science,” “Collaborative Math Projects,” and “A New Look at the Writing Process.” We offered curriculum workshops that involved technology. Usually, the technology transformed the learning process.

    People in the educational technology field are most effective when they focus foremost on student academic learning; they are least effective when they “sell” technology to teachers.
