Category: Social Media

  • ‘It’s Not the Magic Pill, But it Will Help’ – The 74


    The parents of slain Fishers junior Hailey Buzbee called on Indiana lawmakers to limit minors’ access to social media after their daughter’s death was linked to a 39-year-old man she spoke to online.

    The original version of SB 199 would have banned social media operators from allowing Hoosier children to make accounts on their platforms and limited access for older teenagers. But this language was stripped in the Senate.

    Now, House lawmakers are considering adding a version of the restriction back with an amendment.

    Speaking at the House Education Committee Wednesday in support of the amendment, Beau Buzbee said 17-year-old Hailey had been lured away from their home by an online predator last month. Law enforcement announced Feb. 1 that she is believed to be deceased and that an Ohio man was arrested in connection with her disappearance.

    Buzbee said their experience showed glaring gaps in Indiana law that needed to be addressed.

    “We are losing the fight to protect our children. The internet and social media are the devils’ and predators’ playgrounds, and it’s on this front that we must fight,” Buzbee told lawmakers. “Please do not let this opportunity slip away.”

    Supporters of Hailey’s Law have also called for schools to provide mandatory updated predator education and for updates to the state’s missing person alert system. Lawmakers said on Monday they would add an expansion to the alert system as an amendment to HB 1303, a bill that increases the penalties for child exploitation, and that they would discuss adding more education to the existing health standards.

    Indiana legislators previously considered — but ultimately failed to advance — a social media ban for minors under 14 and restrictions for those under 17 this year.

    The most recent iteration of the ban is the amendment to SB 199, which requires social media providers to estimate the age of an account user and seek permission from the parents of users under 16. For minor accounts, the amendment forbids social media providers from using an algorithmic feed or selling data for advertising purposes, restricts who can contact the user, and gives parents monitoring tools.

    Critics have raised First Amendment concerns as well as the possibility that the state will be drawn into an extended legal challenge over the law.

    But supporters of a restriction on social media, including Secretary of Education Katie Jenner, say the state must act to address the risks of social media to children and teens the way it does for other dangerous activities, like tobacco use. Social media use is linked to depression, irregular sleep, and a lack of physical activity and social emotional support, said State Health Commissioner Lindsay Weaver. And these issues spill over to classrooms and affect learning, school leaders said.

    House lawmakers heard hours of testimony overwhelmingly in support of the language on Monday, but did not take action to add it to the bill.

    Supporters of the amendment included South Bend student Rima Bahradine-Bell, who said social media use promises community and affirmation but actually leads to comparison and dependency.

    “I’m coming to you as a teenager and a high schooler, and I’m telling you that I would have liked to not have any social media at that age,” she said. “My friends are telling me to tell you that we did not want this.”

    Amy Klink, a school counselor at Guerin Catholic High School, said she frequently speaks to students experiencing mental health crises as a result of social media and to their parents, who struggle to restrict social media access.

    “Even when parents are aware of a social media account, they can’t be aware of every account with a new name. Parental verification could help with this,” Klink said. “It’s not the magic pill, but it will help.”

    SB 199 will return to the House Education Committee on Wednesday for lawmakers to amend and vote.

    Aleksandra Appleton covers Indiana education policy and writes about K-12 schools across the state. Contact her at [email protected].

    Chalkbeat is a nonprofit news site covering educational change in public schools.



  • Funding Issues Make Student Devices Hard to Replace, DPI Says – The 74


    A new Department of Public Instruction (DPI) report says that 100% of traditional public school districts currently have a 1-to-1 digital device-to-student ratio, though many districts are struggling to replace old or damaged devices due to a lack of funding.

    Dr. Ashley McBride, a digital learning initiative consultant at DPI, presented the Statewide Trends in Student Digital Learning Access report at the State Board of Education meeting on Wednesday.

    The report compiles data on students’ access to digital devices in and out of school, as well as their out-of-school internet access, from 115 school districts and 239 charter, lab, and regional schools. Among those 239 nontraditional schools, 84% had a 1-to-1 digital device-to-student ratio.

    The report says that in total, these public school units had 1,190,045 digital devices available for students in 2024-25. Chrome devices made up 90.3% of this fleet, Windows devices 8.7% and Apple devices 1%.

    Students can take fewer than half of these devices home, as 56% of them must stay on school campuses.

    “Together, these findings demonstrate that North Carolina continues to rely heavily on school-issued, portable devices to support both in-school instruction and extended learning opportunities beyond the school day,” the report says.

    The report also included findings from a survey on out-of-school devices with responses from families representing 55,082 students.

    In this sample, 42% of families said their student uses a school-provided device at home, while a third said their student uses a device owned by the family. Around one in five families reported that their student has access to both family-owned and school-provided devices at home. However, 4% of families reported their student does not have access to a digital device at home.

    Families who did not have devices at home said they were too expensive, they chose not to purchase one, or the devices they owned were broken, damaged, or outdated, according to the report.

    A survey with 36,365 respondent families found that 93% had consistent and adequate internet access for their students at home. Families with limited or no access to the internet at home said that was due to high costs or the internet connection not being dependable.

    Still, those families described several alternatives they use to ensure their students can access the internet, including public libraries, hot spots, other people’s homes and school parking lots.

    “My rural county, still one third of it, does not have internet capability. And after Helene, many parts of our community do not have Wi-Fi coverage, nor do they have cell coverage. That’s typical in the western part of the state,” said Board member John Blackburn, who represents the state’s Northwest region. “I just want to remind everybody that there are still points of darkness in the state of North Carolina.”

    Beckie Spears, the 2024 Wells Fargo Principal of the Year, said that her rural elementary school had one Chromebook cart per grade level prior to 2020. Now, there’s one in every classroom, she said, but the devices are aging and the district doesn’t “have any ways to replace them.”

    “The reality is we have stretched every resource as far as we can, and in Tier 1 counties and Tier 2 counties where local funds are not accessible, this is a real and urgent problem that needs attention from our legislators,” Spears said.

    The report says that these findings highlight the importance of school-provided digital devices for students. But since pandemic-era funding from the federal Elementary and Secondary School Emergency Relief Fund (ESSER) and the Emergency Connectivity Funds (ECF) has ended, many schools are struggling to sustain student device programs.

    McBride’s presentation said 88 of the state’s 115 traditional school districts — nearly 77% — as well as 97 charter, lab, and regional schools, don’t have dedicated funds to refresh students’ school-provided digital devices.

    “Large portions of the current device fleet have aged beyond expected lifespans, resulting in higher failure rates, declining performance, and reduced reliability for both classroom and at home use,” the report says.

    The report says some schools have limited or stopped take-home access for their device fleets because they don’t have inventory to replace them.

    According to McBride, prior to ESSER funding, only 16 school districts had a 1-to-1 digital device-to-student ratio.

    DPI recommends that the state allocate recurring funding to support student device programs to reduce reliance on short-term federal funding, according to the report. This legislative session, DPI requested $152.6 million in recurring funds for a 1-to-1 device refresh over a four-year period.

    The report also recommends providing statewide guidance on devices’ life cycle management, including cost considerations and multiyear budgeting strategies. The department also recommends using data systems to track devices’ age, availability, and take-home capacity, and “exploring how to improve parental participation in reporting on home connectivity and device access.”

    This article first appeared on EdNC and is republished here under a Creative Commons Attribution-NoDerivatives 4.0 International License.



  • Iowa Teacher Committed Misconduct With His Anti-Kirk Facebook Posts – The 74


    An administrative law judge has ruled that an Iowa school teacher committed job-related misconduct when he posted negative Facebook comments about conservative activist Charlie Kirk.

    Matthew Kargol worked for the Oskaloosa Community School District as an art teacher and coach until he was fired in September 2025. Kargol then filed for unemployment benefits and the district resisted, which led to a recent hearing before Administrative Law Judge David Steen.

    In his written factual findings of the case, Steen reported that on Sept. 10, 2025, Kargol had posted a comment to Facebook stating, “1 Nazi down.” That comment was posted within hours of authorities confirming Kirk had been shot and killed that day while speaking at Utah Valley University in Orem, Utah.

    When another Facebook user commented, “What a s—-y thing to say,” Kargol allegedly replied, “Yep, he was part of the problem, a Nazi.”

    Steen reported that Kargol posted his comments around 5 p.m. and then deleted them within an hour. By 6 p.m., the district began fielding a number of telephone calls and text messages from members of the public, Steen found.

    According to Steen’s findings, the district’s leadership team met that evening and included Kargol via telephone conference call. District leaders asked Kargol to resign, and he declined, after which the district officials said they were concerned for his safety due to the public’s reaction to his comments.

    The district placed Kargol on administrative leave that evening, Steen found. The next day, district officials fielded roughly 1,500 telephone calls and received 280 voicemail messages regarding Kargol’s posts.

    “These calls required the employer to redirect staff and other resources from their normal duties,” Steen stated in his ruling. “The employer also requested additional law enforcement presence at school facilities due to the possibility of physical threats, which some of the messages alluded to. The employer continued to receive numerous communications from the public for days after the post was removed.”

    On Sept. 16, 2025, Superintendent Mike Fisher submitted a written recommendation to the school board to fire Kargol, with the two primary reasons cited as a disruption to the learning environment and a violation of the district’s code of ethics. Upon Fisher’s recommendation, the board fired Kargol on Sept. 17, 2025.

    According to Steen’s findings, the district calculated that the cost of its response to the situation was $14,332.10 – an amount that includes the wages of the regular staff who handled the phone calls and other communications.

    As for the ethics-policy violation, Steen noted that the policy states that employees “are representatives of the district at all times and must model appropriate character, both on and off the worksite. This applies to material posted with personal devices and on personal websites and/or social media accounts.”

    The policy goes on to say that social media posts “which diminish the professionalism” of the district may result in disciplinary action, including termination, if they are found to be disruptive to the educational environment.

    The district, Steen noted, also has a policy on “employee expression” that states “the First Amendment protects a public employee’s speech when the employee is speaking as an individual citizen on a matter of public concern,” but that “even so, employee expression that has an adverse impact on district operations and/or negatively impacts an employee’s ability to perform their job for the district may still result in disciplinary action up to and including termination.”

    Based on the policies and Kargol’s conduct, Steen concluded the district fired Kargol for job-related misconduct that disqualified him from collecting unemployment benefits.

    The issue before him, Steen observed, wasn’t whether the district made a correct decision in firing Kargol, but whether Kargol is entitled to unemployment insurance benefits under Iowa law.

    In ruling against Kargol on that issue, Steen noted Kargol was aware of district policies regarding social media use as well as work rules that specifically state employees are considered representatives of the school district at all times.

    Kargol’s posts, Steen ruled, “reflected negatively on the employer and were against the employer’s interests.” The posts also “caused substantial disruption to the learning environment, causing staff at all levels to need to redirect focus and resources on the public’s response for days after the incident,” Steen stated.

    Kargol’s federal lawsuit against the school district, alleging retaliation for exercising his First Amendment right to expression, is still working its way through the courts.

    In that lawsuit, Kargol argues that in comments made last fall, Fisher made clear that his condemnation of Kargol’s Facebook posts “was rooted in his personal beliefs, not in evidence of disruption. Speaking as ‘a man of faith,’ Fisher expressed disappointment in the state of society and disapproval of Mr. Kargol’s expression. By invoking his personal religious identity in condemning Mr. Kargol’s speech, Fisher confirmed that his reaction was based on his own values and ideology, not on legitimate pedagogical concerns.”

    The district has denied any wrongdoing in that case. A trial date has yet to be scheduled.

    Several other lawsuits have been filed by Iowa educators, a public defender and a paramedic against their former employers, all alleging they were fired or sanctioned for online comments posted in the immediate aftermath of Kirk’s death.

    Earlier this week, two Iowa teachers sued the state’s teacher-licensing board and its executive director, alleging they improperly solicited complaints related to anti-Kirk social media posts.

    Iowa Capital Dispatch is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Iowa Capital Dispatch maintains editorial independence. Contact Editor Kathie Obradovich for questions: [email protected].



  • Disinformation and the decline of democracy


    The unprecedented mob assault on the U.S. Capitol on January 6 represents perhaps the most stunning collision yet between the world of online disinformation and reality.

    The supporters of U.S. President Donald Trump who broke into Congress did so in the belief that the U.S. election was stolen from them after weeks of consuming unproven narratives about “ballot dumps,” manipulated voting machines and Democrat big-city corruption. Some — including the woman who was shot dead — were driven by the discredited QAnon conspiracy theory that portrays Democratic Party elites as a pedophile ring and Trump as the savior.

    It’s tempting to hope that disinformation and its corrosive effects on democracy may have reached a high-water mark with the events of January 6 and the end of Trump’s presidency. But trends in technology and society’s increasing separation into social media echo chambers suggest that worse may be to come.

    Imagine for a moment if video of the Capitol riot had been manipulated to replace the faces of Trump supporters with those of known protestors from antifa, a left-wing, anti-fascist and anti-racist political movement. This would have bolstered the unproven story that has emerged about a “false flag” operation. Or imagine if thousands of different stories written by artificial intelligence software and peddling that version of events had flooded social media and been picked up by news organizations in the hours after the assault.

    That technology not only exists. It’s getting more sophisticated and easier to access by the day.

    Trust in democracy is eroding.

    Deepfake, or synthetic, videos are starting to seep from pornography — where they’ve mostly been concentrated — into the world of politics. A deepfake of former President Barack Obama using an expletive to describe Trump has garnered over eight million views on YouTube since it was released in 2018.

    Almost anyone familiar with Obama’s appearance and speaking style can tell there’s something amiss with that video. But two years is an eternity in AI-driven technology, and many experts believe it will soon be impossible for the human eye and ear to spot the best deepfakes.

    A deepfake specialist was hailed early last year for using freely available software to “de-age” Robert De Niro and Joe Pesci in the movie “The Irishman,” producing a result that many critics considered superior to the work of the visual-effects supervisor on the actual film.

    In recent years, the sense of shared, objective reality and trust in institutions have already come under strain as social media bubbles hasten the spread of fake news and conspiracy theories. The worry is that deepfakes and other AI-generated content will supercharge this trend in coming years.

    “This is disastrous to any liberal democratic model because in a world where anything can be faked, everyone becomes a target,” Nina Schick, the author of “Deepfakes — The Coming Infopocalypse,” told U.S. author Sam Harris in a recent podcast.

    “But even more than that, if anything can be faked … everything can also be denied. So the very basis of what is reality starts to become corroded.”

    Governments must do more to combat disinformation.

    Illustrating her point is the reaction to Trump’s video statement released a day after the storming of Congress. While some of his followers online saw it as a betrayal, others reassured themselves by saying it was a deepfake.

    On the text side, the advent of GPT-3 — an AI program that can produce articles indistinguishable from those written by humans — has potentially powerful implications for disinformation. Writing bots could be programmed to produce fake articles or spew political and racial hatred at a volume that could overwhelm fact-based, moderated content.

    Society has been grappling with written fake news for years and photographs have long been easily manipulated through software. But convincingly faked videos and AI-generated stories seem to many to represent a deeper, more viral threat to reality-based discourse.

    It’s clear that there’s no silver-bullet solution to the disinformation problem. Social media platforms like Facebook have a major role to play and are developing their own AI technology to better detect fake content. While fakers are likely to keep evolving to stay ahead, stricter policing and quicker action by online platforms can at least limit the impact of false videos and stories.

    Governments are coming under pressure to push Big Tech into taking a harder line against fake news, including through regulation. Authorities can devote more funding to digital media literacy programs in schools and elsewhere to help individuals become more alert and proficient in identifying suspect content.

    When it comes down to it, the real power of fake news hinges on those who believe it and spread it.


    Questions to consider:

    1. How can technology be used to spread fake news?

    2. Why is disinformation potentially harmful to democracy?

    3. How do you think the rise of AI technology will affect the type of information people consume?


  • A week of media literacy across the globe


    From 24 to 31 October, the world marks Global Media and Information Literacy Week, an annual event first launched by UNESCO in 2011 as a way for organizations around the world to share ideas and explore innovative ways to promote media and information literacy for all. This year’s theme is Minds Over AI — MIL in Digital Spaces. 

    To join in the global conversation, over the next week News Decoder will present a series of articles that look at media literacy in different ways.

    Today, we give you links to articles we’ve published over the past year on topics that range from fact-checking and information verification to the power of social media and the good and bad of artificial intelligence. 


  • How unis can do more on social media – Campus Review


    Too many universities overlook the richness of the human stories that define them, relying instead on polished marketing campaigns and generic social media content to attract the next generation of students.


  • A decade getting teens to do something many avoid: Think


    TikTok reels, attention-grabbing headlines and AI that spits out instant answers. The way teens engage with the world today often lacks depth.

    But at this age, when the teen brain is rapidly developing, deeper thinking — dubbed transcendent thinking by psychologists — is vital not just for self-reflection and problem solving, but also for self-esteem and better relationships in adulthood.

    Many of us spend a lot of time in what researchers call “surface-level” thinking — reacting to what’s in front of us. Transcendent thinking, though, is when we go beyond concept descriptions, to wrestle with questions like, What does this say about justice? How do systems work? Where do I fit in all this? 

    “Young people get so much information fed to them through social media, influencers and podcasts and AI, and this is such a passive way of learning,” said Marcy Burstiner, News Decoder’s educational news director. “Dangerous, really, if they aren’t critically thinking about the information they are getting.”

    That’s why for 10 years News Decoder has used the lens of journalism to engage students in the process of learning. Through our educational programs, students are encouraged to ask big questions, identify problems they see around them and talk to people to get their questions answered — classmates, neighbours, family and experts. In doing this, they find out information themselves. 

    Spoon-fed learning

    This is more important than ever as the internet transforms from a place where people would lose themselves as they “surfed,” stumbling upon all kinds of new and interesting information along the way, into a place where an AI bot does that for them and spits out summarized results. 

    For 10 years, we’ve been asking teens to find real people to interview, to compare their different perspectives and from that to come up with their own original thoughts about complex topics where there isn’t a clear right and wrong, where there are layers of inequity. 

    Student Jack McConnel at The Tatnall School in the United States did this when he interviewed his state’s congressional representative, Sarah McBride, the nation’s first transgender member of Congress.

    Through the research he did, and after interviewing McBride, McConnel came to the conclusion that voters in his district didn’t elect her because of gender identity, but because McBride pledged to help solve the more mundane issues they cared most about — protecting consumers from getting scammed, for example, or helping farmers to lower food prices. Gender identity wasn’t their most important concern.

    Students like McConnel who work with News Decoder often start with a “pitch” — a proposal for a news story. In the pitch, we have them ask a big question that their story will answer. McConnel’s pitch asked three: “What role does identity play in our elected officials? Has this fixation from both sides made congressional and senatorial positions simply for show? Does it matter more who the person is or what the person does, and have we lost sight of what matters about our politicians?”

    Through his research and his one-to-one interview with McBride, he was able to answer all those questions. 

    Beyond facts

    Hannah Choo is a student at an international school in South Korea, and is working with News Decoder as a summer intern. As part of her work, she creates video content for social media based on articles published on News Decoder.

    Choo has found that through engaging with these stories, she’s forming a deeper connection with the issues the stories explore. She said the challenge is to go beyond merely summarizing the information. The goal is to connect with an audience. 

    “And that puts me in a position where I need to really focus on why this issue matters and why I should care,” Choo said. “And that gives a lot more of a sense of purpose.” 

    Choo remembers talking to a biology graduate student, who told her about apoptosis, a process whereby cells die off — a way our bodies get rid of unneeded cells. Alone, this concept feels meaningless, even dry. 

    But the grad student told Choo that when we’re initially formed in the womb, we have paddle-shaped hands with a webbing of skin connecting the fingers and toes. This webbing disappears as we form, due to this apoptosis. Choo remembers looking at her own hands in fascination.

    “And so later, when I actually got to learn biology and learn about the cell cycle, it was a lot easier for me to engage with the topic,” Choo said. “I wasn’t just studying science but I was studying my own body.” 

    From deep thinking to deeper relationships

    A five-year study, published in 2024 in the journal Scientific Reports, followed 65 teenagers aged 14-18 to see how transcendent thinking shapes their brains, and how this further shapes their lives.  

    The teens were shown emotionally rich mini-documentaries featuring real stories of adolescents around the globe — a method that triggers transcendent thinking. They then talked through what the stories meant: how they felt, why they mattered, and what bigger ideas they raised. 

    The researchers found that teens who engaged in this deeper style of thinking showed stronger connections over time between two key brain networks — one involved in self-reflection and big-picture thought, the other in focus and problem solving.

    Crucially, they also found that these teens went on to have a clearer sense of identity in late adolescence, which later linked to greater self-esteem and better relationships in young adulthood.

    One way News Decoder helps young people understand deeper meanings and broader implications is by having them look at societal problems and possible solutions. 

    Searching out solutions

    At News Decoder we ask students to identify a problem in their community and then see if they can find people working to solve that problem. 

    “In the process they see at first that a lot of problems seem to have no solution or the solutions are so far off,” Burstiner said. “But all the complications that prevent solutions are like protective layers around the problem. They are like the levels you need to surmount in a video game.”

    If a teen has the patience and persistence to work through those complications they can not only see the solutions but they can see what is preventing those solutions, Burstiner said. 

    One News Decoder student in India wondered what might happen when climate change causes massive migration. 

    “In exploring the topic she hit on the idea of lost languages — that a language is what often ties a community together and connects generations. But if a community is forced to disperse and the people end up integrating into other lands, the language that connected them could die out,” Burstiner said.

    Connecting dots 

    Another student at The Tatnall School played soccer, and began thinking about how much it cost his family for him to play at a competitive level. “In exploring this he realized how much of competitive sports is elitist and how much more difficult it is for someone to go into professional sports if they are poor,” Burstiner said. 

    When students conduct interviews with people who understand these topics in-depth or who are affected by these issues, they can further connect their sense of self with these stories. 

    Choo, during her internship, pitched a story about cancer, because a close family member was undergoing cancer treatment. She asked this question: “How does climate change affect the quality of healthcare for cancer patients?”  

    In doing the research, uncovering connections and conducting interviews, she connected the often-abstract issue of climate change to her own life. 

    “This was the first time I could really connect climate change to my own life and my own loved ones,” Choo said.


  • What really shapes the future of AI in education?


    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.


    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider three concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners—all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen at a uniform timeline because individualized pacing quickly becomes completely unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deep fakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.

    Of course, AI is not perfect as a fact-checker. When large language models generate text, they aren’t producing precise queries of facts; they’re making probabilistic guesses at what the right response should be based on their training, and sometimes those guesses are wrong. (Just like human experts, they also generate answers by drawing on their expertise, and they sometimes get things wrong.) AI also has its own blind spots and biases based on the biases it inherits from its training data. 

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information, and reduce the friction it takes to verify what’s true. Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed? But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas. That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth. 

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It has also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut. But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user wellbeing.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In the book, The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.


  • Sharing is good, except when it isn’t


    In the wake of the floods in the U.S. state of Texas earlier this month, news circulated on social media of two girls being rescued. One of the first posts sharing the story included a screenshot of a social media post that read:

    Rescuers find 2 girls in tree, 30-feet up, near Comfort

    The dramatic rescue occurred closer to Comfort, which is in Kendall County, witnesses said. The girls were found in the tree during ongoing search operations for victims of Friday’s catastrophic flooding that has killed 59 people across Kerr County.

    A Facebook search of the post’s keywords returned dozens of identical or similarly worded posts retelling the harrowing rescue. Other versions of the story were also shared across social media platforms like Instagram Threads, as well as in now-deleted articles across various news outlets.

    But the story was fabricated. 

    It was a prime example of a type of misinformation known as “copypasta.” 

    Inciting fear

    Social media posts that utilize copypasta — a portmanteau of “copy” and “paste” — are often used to incite fear or evoke emotions, prompting users to like and share the content. These posts are used for various reasons, whether to polarize different political groups further or to attract a broader audience and spread misinformation. 

    Alex Kasprak is an investigative journalist who reported for the digital fact-checking website Snopes for nearly a decade. In his experience, Kasprak says copypasta plays a central role in online misinformation. (For more on Snopes’ take on copypasta, head to this link.) 

    “The simplest way to put it, is that copypasta is a text that you see that is identical or nearly identical posted either with somebody’s name as an author or without it in an identical form on multiple posts such that it’s clear that whoever is posting it copied it from somewhere else,” said Kasprak. 

    “What you end up getting in that sort of phenomenon is a game of telephone.”  

    Copypasta serves as a new-age version of the chain letters seen in the early days of email, which promised good luck for forwarding a message or foretold misfortune if you let the email sit in an inbox.

    Lacking credibility

    In the case of copypasta, social media users are encouraged to comment, share or tag their friends in a post to boost engagement. Such emotion-evoking messages can serve as an entry point into more polarizing content, which is often rife with false information. 

    To identify copypasta, look for signs of vague or generic information that lacks a credible source or call to action. The way a post is written can also serve as an indication that it may be a copy-and-paste text. 

    “With copypasta, everything generally kind of travels forward, including errors in grammar or mistranslations,” Kasprak said. “If there are weird sentences that just kind of end or don’t fully make grammatical sense, that is an indicator that the tone of the message doesn’t match.”

    If the post is shared by someone that you know on your feed, but the tone is different than how they usually post or talk, the content likely originated from another source — credible or not, Kasprak said. 

    In addition to spreading false information, copypasta can be used as part of bigger campaigns to push particular sentiments or ideologies. For example, back in 2017, U.S. government officials found evidence that Russian “trolls” had taken to social media, deploying campaigns to connect certain users to various organizations or movements.

    Danger to the infosphere

    During these online campaigns, nefarious actors meddled in the election by posting emotional content to get users to engage, gradually bringing them down a digital rabbit hole of more polarizing issues.

    Kasprak adds that copypasta content also harms the “infosphere,” or public knowledge otherwise rooted in fact. When copypasta becomes widespread and is presented as a “pseudofact,” people begin to cite it as common knowledge. One widely held belief, for example, is that a mother bird will abandon its offspring if a human touches it. Experts agree that this notion is not true. 

    Another tactic of those who post copypasta is to poison AI models, much as fake news websites do. When enough content on the internet makes a particular claim, AI systems may pick up on that noise and repeat it as fact. In this way, AI programs are effectively “trained” to believe those posts over other sources of information.

    Emotion-evoking posts may also fall into the copypasta category if they are not rooted in fact. If a post’s language immediately sparks anger, sadness or another strong emotion, it may be fake. 

    “In general, the big thing to watch out for is if something fits perfectly into your notion of how the world works,” said Kasprak. Posts that validate a person’s view of the world or evoke strong emotions, positive or negative, should raise a red flag. 

    Kasprak advises users to check their biases when reading potential copypasta content; if something makes you angry or sad, double-check its source and legitimacy. 

    “Pause if you feel strongly about wanting to share something, because those posts are the ones where the risk of copypasta is higher,” said Kasprak. When he comes across a post he believes to be copypasta, Kasprak says he tries to “tear apart” the argument, especially if it supports his own beliefs, until it dissolves. 

    “Check your blind spots and be vigilant in checking your work,” said Kasprak. 

    When in doubt, don’t share.


     

    Questions to consider:

    1. What is meant by “copypasta”?

    2. How can something false become commonly believed?

    3. Can you remember the last thing you reposted on social media? What kind of things do you share with your network?


     


  • Is social media turning our hearts to stone?

    Is social media turning our hearts to stone?

    As global digital participation grows, our ability to connect emotionally may be shifting. Social media has connected people across continents, but it also reshapes how we perceive and respond to others’ emotions, especially among youth. 

    Empathy is the ability to understand and share another’s feelings, helping to build connections and support. It’s about stepping into someone else’s shoes, listening and making them feel understood.

    While platforms like Instagram, TikTok and X offer tools for global connection, they may also be changing the way we experience empathy.

    Social media’s strength lies in its speed and reach. Instant sharing allows users to engage with people from different backgrounds, participate in global conversations and discover social causes. But it also comes with downsides. 

    “People aren’t doing research for themselves,” says Marc Scott, the diversity, equity and community coordinator at the Tatnall School, the private high school that I attend in the U.S. state of Delaware. “They see one thing and take it for fact.”

    Communicating in a two-dimensional world

    That kind of surface-level engagement can harm emotional understanding. The lack of facial expressions, body language and tone — key elements of in-person conversation — makes it harder to gauge emotion online. This often leads to misunderstandings, or worse, emotional detachment.

    In a world where users often post only curated highlights, online personas may appear more polished than real life. “Someone can have a large following,” Scott said. “But that’s just one person. They don’t represent the whole group.” 

    Tijen Pyle teaches Advanced Placement psychology at the Tatnall School. He pointed out how social media can amplify global polarization. 

    “When you’re in a group with similar ideas, you tend to feel stronger about those opinions,” he said. “Social media algorithms cater your content to your interests and you only see what you agree with.” 

    This selective exposure limits empathy by reducing understanding of differing perspectives. The disconnect can reinforce stereotypes and limit meaningful emotional connection.

    Overexposure to media

    Compounding the problem is “compassion fatigue” — when constant exposure to suffering online dulls our emotional response. Videos of crisis after crisis can overwhelm users, turning tragedy into background noise in an endless scroll.

    A widely cited study published in the journal Psychological Science in 2013 examined the effects of exposure to media related to the 9/11 attacks and the Iraq War. The study, led by Roxane Cohen Silver, found that vicariously experienced events, such as watching graphic media images, can lead to collective trauma.

    Yet not all emotional connection is lost. Online spaces have also created powerful support systems — from mental health communities to social justice movements. These spaces offer users a chance to share personal stories, uplift one another and build solidarity across borders. “It depends on how you use it,” Scott said.

    Many experts agree that digital empathy must be cultivated intentionally. According to a 2025 Pew Research Center study, nearly half of U.S. teens believe that social media platforms have a mostly negative effect on people their age, a significant increase from 32% in 2022. This growing concern underscores the complex nature of online interactions, where the potential for connection coexists with the risk of unkindness and emotional detachment.

    So how do we preserve empathy in a digital world? It starts with awareness. Engaging critically with content, seeking out diverse viewpoints and taking breaks from the algorithm can help. “Social media can expand your perspectives — but it can also trap you in a single mindset,” Scott said. 

    I first started thinking about this topic when I kept having the same conversations with different people and sensing a kind of ignorance. It wasn’t that they didn’t care — it was like they didn’t know how to care. 

    The way they responded to serious topics felt cold or disconnected, almost like they were watching a video instead of talking to a real person. 

    That made me wonder: has social media changed the way we understand and react to emotions?

    Ultimately, social media isn’t inherently good or bad for empathy. It’s a tool. And like any tool, its impact depends on how we use it. If we use it thoughtfully, we can ensure empathy continues to grow, even in a world dominated by screens.


    Questions to consider:

    1. What is empathy and why is it important?

    2. How can too much time spent on social media dull our emotional response?

    3. How do you know if you have spent too much time on social media?


     
