Tag: social

  • Octopus researcher Meg Mindlin on science art and video for social media

    What’s it like to be both an artist and a scientist? Meg Mindlin studies octopuses and shares videos on Instagram Reels and TikTok. And she’s a talented artist who helps people communicate science in an engaging way. I felt lucky to attend her thesis defense live on YouTube.

    In this conversation, we talk about her research, dealing with the political spectrum when speaking up on social media, and sharing her art online.

    Meg Mindlin (@invertebabe) is a molecular biologist and science communicator. She combines her background in art with an ability to communicate complex science in an engaging manner. She received her master’s degree in biology studying octopuses and how ocean acidification affects a molecular process known as RNA editing.

    Meg Mindlin sits on a desk at the front of a lecture hall. She's just defended her master's thesis, titled Tickled Zinc. On the screen behind her is a beautiful title slide for her research presentation which features original art.

    Source link

  • How do we gain and measure social licence? – Campus Review

    Podcasts

    Source link

  • What really shapes the future of AI in education?

    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.

    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider three concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners—all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen at a uniform timeline because individualized pacing quickly becomes completely unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deepfakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.
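
    To make this concrete, here is a minimal sketch of how such a sidebar pipeline might be wired together. It is purely illustrative: every function and class name is hypothetical, and the steps that would normally call a large language model and a web-search or citation service are stubbed out rather than real APIs.

        # Sketch only: in a real system, extract_claims and check_claim would call an
        # LLM paired with web search or a citation database, as described above.
        from dataclasses import dataclass


        @dataclass
        class SidebarNote:
            claim: str          # the checkable statement pulled from the post
            verdict: str        # e.g. "supported", "misleading", "unverified"
            context: str        # missing context or a brief correction
            sources: list[str]  # links to credible sources, when found


        def extract_claims(post_text: str) -> list[str]:
            """Stub: keep sentences that look like factual claims (here, anything with a number)."""
            return [s.strip() for s in post_text.split(".") if any(ch.isdigit() for ch in s)]


        def check_claim(claim: str) -> SidebarNote:
            """Stub: a real implementation would investigate the claim against sources
            and explain discrepancies; this placeholder just marks it unverified."""
            return SidebarNote(
                claim=claim,
                verdict="unverified",
                context="No corroborating source checked in this sketch.",
                sources=[],
            )


        def build_sidebar(post_text: str) -> list[SidebarNote]:
            """Assemble the instant sidebar summary for a post."""
            return [check_claim(c) for c in extract_claims(post_text)]


        if __name__ == "__main__":
            post = "Crime rose 400% last year. The new policy changed nothing."
            for note in build_sidebar(post):
                print(f"[{note.verdict}] {note.claim} -- {note.context}")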

    Of course, AI is not perfect as a fact-checker. When large language models generate text, they aren’t retrieving facts from a precise database; they’re making probabilistic guesses at what the right response should be based on their training, and sometimes those guesses are wrong. (Human experts work similarly: they generate answers by drawing on their expertise, and they sometimes get things wrong.) AI also has blind spots and biases of its own, inherited from its training data.

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information and reduce the friction of verifying what’s true. Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed?

    But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas. That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth.

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It has also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut. But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user wellbeing.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In his book The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.

    Source link

  • More comprehensive EDI data makes for a clearer picture of staff social mobility

    Asking more granular EDI questions of PGRs and staff should be a sector priority. It would enable universities to assess the diversity of their academic populations in the same manner they already do for their undergraduate bodies – but with the addition of a valuable socio-economic lens.

    It would equip us more effectively to answer basic questions about how far the diversity of our undergraduate community carries through to our PGT, PGR and academic populations, and to see where ethnicity and gender intersect with socio-economic status and caring responsibilities, contributing to individuals falling out of (or choosing to leave) the “leaky” academic pipeline.

    One tool to achieve this is the Diversity and Inclusion Survey (DAISY), a creation of Equality, Diversity and Inclusion in Science and Health (EDIS) and the Wellcome Trust. This toolkit outlines how funders and universities can collect more detailed diversity monitoring data of their staff and PGRs as well as individuals involved in research projects.

    DAISY suggests questions regarding socio-economic background and caring responsibilities that nuance or expand upon those already in “equal opportunities”-type application forms that exist in the sector. DAISY asks, for example, whether one has children and/or adult dependents, and how many of each, rather than the usual “yes” or “no” to “do you have caring responsibilities?” Other questions include the occupation of your main household earner when aged 14 (with the option to pick from categories of job type), whether your parents attended university before you were 18, and whether you qualified for free school meals at the age of 14.

    EDI data journeys across the sector

    As part of an evolving data strategy, UCAS already collects several DAISY data points on its applicants, such as school type and eligibility for free school meals, with the latter gaining traction across the university sector and policy bodies as a meaningful indicator of disadvantage.

    Funders are interested in collecting more granular EDI data. The National Institute for Health and Care Research (NIHR), for example, invested around £800 million in the creation of Biomedical Research Centres in the early 2020s. The NIHR specifically encouraged the collection of DAISY data both on the researchers each centre would employ and on the individuals who would take part in their research, in the belief (see theme four of its research inclusion strategy) that a diverse researcher workforce will make medical science more robust.

    The diversity monitoring templates attached to recent UKRI funding schemes similarly highlight the sector’s desire for more granular EDI data. UKRI’s Responsive Mode Scheme, for example, requires institutions to benchmark their applicants against a range of protected characteristics, including ethnicity, gender, and disability, set against the percentage of the “researcher population” at the institution holding those characteristics. The direction of travel in the sector is clear.

    What can universities do?

    Given the data journeys of UCAS and funding bodies, it is sensible and proportionate that universities ask more granular EDI questions of their PGRs and their staff. Queen Mary began doing so for its staff and PGRs in October 2024, using the DAISY toolkit as a guide, alongside work to capture similar demographic data in the patient population involved in clinical trials supported by Queen Mary and Barts Health NHS Trust.

    While we have excellent diversity in our undergraduate community, we see less in our PGR and staff communities. Embedding more granular data collection into our central HR processes for staff and our admissions processes for PGRs allows us to assess (eventually, at least, given adequate disclosure rates) how far the diversity of our undergraduate population carries through to our PGT, PGR and academic populations.

    Embedding the collection of more granular EDI data into central HR and admissions systems required collaboration across Queen Mary’s Research Culture, EDI, and HR teams, creating new information forms and systems to collect the data while ensuring it could be linked to other datasets. The process was also accelerated by a clinical trials unit in our Faculty of Medicine & Dentistry that had already piloted the collection of this data on a smaller scale, providing a proof of concept for our colleagues in HR.

    EDI data and the PGR pipeline

    Securing the cooperation of our HR and EDI colleagues was made easier thanks to our doctoral college, who had already incorporated the collection of more granular EDI data into an initiative aimed at increasing the representation of Black British students in our PGR community: the STRIDE programme.

    Standing for “Summer Training Research Initiative to Support Diversity and Equity”, STRIDE gives our BAME undergraduate students the opportunity to undertake an eight-week paid research project over the summer, alongside a weekly soft skills programme including presentation and leadership training. The programme has run annually since 2020 with excellent outcomes (almost 70 per cent of the first cohort successfully applied to funded research programmes). But it was only when we incorporated more granular EDI questions into the application form for the 2024 cohort of 425 applicants that we saw the intersectional barriers to postgraduate study faced by our applicants – barriers that would have been obscured had we only collected basic EDI data.

    Among other insights, 47 per cent of applicants to STRIDE had been eligible at some point for free school meals. This contrasts with our broader undergraduate community, 22 per cent of whom were eligible for free school meals. Some 55 per cent of applicants reported that neither of their parents went to university, and 27 per cent reported that their parents had routine or semi-routine manual jobs. Asking more than the usual suite of EDI questions allows us to picture more clearly the socio-economic and cultural barriers that intersect with ethnicity to make entry into postgraduate study more difficult for members of underrepresented communities.

    The data chimed with internal research we conducted in 2021, in which we discovered that many of the key barriers to our undergraduates engaging in postgraduate research were the same as those faced by students who were first in their family to go to university: namely, a lack of family understanding of what a further degree involves and of the financial benefits of completing a postgraduate research degree.

    Collecting more granular EDI data will allow us to understand and support diversity that is intersectional, while enabling more effective assessment of whether Queen Mary is moving in the right direction in terms of making research degrees (and research careers) accessible to traditionally underrepresented communities at our universities. But collecting such data on our STRIDE applicants makes little sense without equivalent data from our PGR and academic community – hence Queen Mary’s broader decision to embed DAISY data collection into its systems.

    The potential of DAISY

    As Queen Mary’s experience with STRIDE demonstrates, nuancing our collection of EDI data comes with clear potential. Given adequate disclosure rates, more granular EDI data makes possible more effective intersectional analyses of PGRs and staff across our sector, and helps us understand their social mobility with more nuance, giving a clearer picture of the journeys faced by those from less privileged social backgrounds and/or those with caring responsibilities.

    More broadly, universities will always be crucial catalysts of social mobility, and collecting more granular data on socio-economic background alongside the personal data they already collect – such as gender, ethnicity, religion and other protected characteristics – is a logical and necessary next step.

    Source link

  • Unfriending on Social Media with Author, Sarah Layden

    When Sarah Layden shared her satire piece ‘Unfriend Me Now’ on her LinkedIn profile, I reached out right away about her appearing on The Social Academic interview series. She wrote ‘Unfriend Me Now’ after reading research from Floyd, Matheny, Dinsmore, Custer, and Woo, “If You Disagree, Unfriend Me Now”: Exploring the Phenomenon of Invited Unfriending, published in the American Journal of Applied Psychology.

    Sarah Layden is the author of Imagine Your Life Like This, stories; Trip Through Your Wires, a novel; and The Story I Tell Myself About Myself, winner of the Sonder Press Chapbook Competition.

    Sarah Layden professional headshot

    Her short fiction appears in Boston Review, Blackbird, McSweeney’s Internet Tendency, Best Microfiction 2020, and elsewhere. Her nonfiction writing appears in The Washington Post, Poets & Writers, Salon, The Millions, and River Teeth, and she is co-author with Bryan Furuness of The Invisible Art of Literary Editing.

    She is an Associate Professor of English at Indiana University Indianapolis.

    Source link

  • All that glitters is not gold: A brief history of efforts to rebrand social media censorship

    Whenever a bill aimed at policing online speech is accused of censorship, its supporters often reframe the conversation around subjects like child safety or consumer protection. Such framing helps obscure government attempts to shape or limit lawful speech, yet no matter how artfully labeled such measures happen to be, they inevitably run headlong into the First Amendment.

    Consider the headline-grabbing Kids Online Safety Act (KOSA). Re-introduced this year by Sens. Marsha Blackburn (R-Tennessee) and Richard Blumenthal (D-Connecticut) as a measure to protect minors, KOSA has been repeatedly characterized by its sponsors as merely providing tools, safeguards, and transparency. But in practice, it would empower the federal government to put enormous pressure on platforms to censor constitutionally protected content. This risk of government censorship led KOSA to stall in the House last year after passing the Senate.

    Child safety arguments have increasingly surfaced in states pursuing platform regulation, but closer inspection reveals that many such laws control how speech flows online, including for adults. Take Mississippi’s 2024 social media law (HB 1126), described as a child safety measure, which compelled platforms to verify every user’s age. Beneath that rhetoric, however, is the fact that age verification affects everyone, not just children. By forcing every user — adult or minor alike — to show personal identification or risk losing access, this law turned a child-safety gate into a universal speech checkpoint. That’s because identity checks function like a license: if you don’t clear the government’s screening, you can’t speak or listen.

    A judge blocked HB 1126 last month, rejecting the attorney general’s argument that it only regulated actions, not speech, and finding that age verification gravely burdens how people communicate online. In other words, despite the bill’s intentions or rationales, the First Amendment was very much at stake.

    Utah’s 2023 Social Media Regulation Act demanded similar age checks, a broad mandate that chilled lawful speech. FIRE sued, the legislature repealed the statute, and its 2024 replacement — the Minor Protection in Social Media Act — met the same fate when a federal judge blocked it. Finding there was likely “no constitutionally permissible application,” the judge underscored the clear conflict between such regulations and the First Amendment.

    Speech regulations often show up with different rationales, not just child safety. In Texas, HB 20 was marketed in 2021 as a way to stop “censorship” by large social media companies. By trying to paint the largest platforms as public utilities and treating content moderation decisions as “service features,” the legislature flipped the script on free expression by recasting a private actor’s editorial judgment as “conduct” the state could police. When the U.S. Court of Appeals for the Fifth Circuit upheld the law, in a decision that was later excoriated by the Supreme Court, the court repeated this inversion of the First Amendment: “The Platforms are not newspapers. Their censorship is not speech.” 

    Florida tried a similar strategy with a consumer-protection gloss. SB 7072 amended the state’s Deceptive and Unfair Trade Practices Act to include certain content moderation decisions, such as political de-platforming or shadow banning, exposing platforms to enforcement and penalties for their speech. Unlike the Fifth Circuit, the Eleventh Circuit blocked this law, calling platform curation “unquestionably” expressive and, therefore, protected by the First Amendment. 

    In July 2024, the Supreme Court weighed in, deciding challenges to these two state laws in Moody v. NetChoice. Cutting through the branding, the Court rejected the idea that these laws merely regulated conduct or trade practices. Instead, it said content moderation decisions do have First Amendment protection and that the laws in Texas and Florida did, in fact, regulate speech.

    The Court clarified in no uncertain terms that “a State may not interfere with private actors’ speech to advance its own vision of ideological balance.” And it added that “[o]n the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana.”

    California tried the dual framing of both child safety and consumer protection. AB 2273, the California Age Appropriate Design Code Act, was described as a child-safety bill that just regulated how apps and websites are built and structured, not their content. The bill classified digital product design features, such as autoplaying videos or default public settings, as a “material detriment” to minors as well as an unfair or deceptive act under state consumer-protection statutes. But this too failed and is now blocked because, the court noted, “the State’s intentions in enacting the CAADCA cannot insulate the Act from the requirements of the First Amendment.”

    Multiple nationwide lawsuits now claim social media feeds are defective products, using product-liability law to attack the design of platforms themselves. But calling speech a “product,” or forcing it into a product-liability claim, recharacterizes editorial decisions about lawful content as product flaws, an attempt to shift the legal analysis from speech protections to consumer protection. State attorneys general, however, cannot erase the First Amendment protections that still apply.

    A sound policy approach to online speech looks not at branding but at impact. Whether a rule is packaged in terms of child safety, consumer protection, or platform accountability, the essential question is whether it forces platforms to host, suppress, or reshape lawful content. Regardless of the policy goal or rhetorical framing, if a requirement ultimately pressures platforms to host or suppress lawful speech, expect judges to treat it as a speech regulation.

    Unfortunately, re-branding speech regulations can obfuscate their censorial ends and make them politically attractive. That’s what’s happening with KOSA: its obvious appeal of protecting children, combined with the less obvious censorship threat of targeting “design features,” has made it popular in the Senate.

    Giving the government power to censor online speech puts everyone’s liberty at risk. Just as Americans enjoy the right to read, watch, and talk about whatever we want offline, those protections extend to our speech online as well. Protecting free expression now keeps the marketplace of ideas open and guards everyone’s right to speak freely.

    Source link

  • Could ATEC boost social license? – Campus Review

    Verity Firth, chair of Engagement Australia and vice-president of societal impact at the University of NSW, joins guest host Alphia Possamai-Inesedy, the pro-vice-chancellor of student success at Western Sydney University.


    Source link

  • Podcast: Industrial strategy, cashpoint colleges, social mobility

    This week on the podcast we examine the government’s new industrial strategy and what it really means for higher education – from regional clusters and research funding to skills bootcamps and spin-out support.

    Will the plans finally integrate universities into the UK’s economic future, or is this another case of policy promises outpacing delivery?

    Plus we discuss the franchising scandal and the damning case for urgent reform, and ask whether new research on social mobility challenges the sector’s claims about access, aspiration, and advancement.

    With Katie Normington, Vice Chancellor at De Montfort University, Johnny Rich, Chief Executive at the Engineering Professors’ Council and Push, James Coe, Associate Editor at Wonkhe and presented by Mark Leach, Editor-in-Chief at Wonkhe.

    Higher education and the industrial strategy priority areas

    The cashpoint campus comeback: franchising, fraud, and the failure to learn from the FE experience

    On the move: how young people’s mobility responds to and reinforces geographical inequalities

    Inequalities in Access to Professional Occupations

    Source link