  • OPINION: Colleges must start treating immigration-based targeting as a serious threat to student safety and belonging  

    by Madison Forde, The Hechinger Report
    January 12, 2026

    Last month, a Boston University junior proudly posted online that he had spent months calling Immigration and Customs Enforcement to report Latino workers at a neighborhood car wash.

    Nine people were detained, including siblings and a 67-year-old man who has lived in the U.S. for decades. The student celebrated the arrests and told ICE to “pump up the numbers.”

    As the daughter of Caribbean immigrants and a researcher who studies immigrant-origin youth, I was shaken but not surprised. Though the incident drew some backlash, it revealed a growing problem on college campuses: Many young people are learning to police one another rather than learn alongside one another.

    That means the new border patrol could be your classmate. Our schools are not prepared for this.

    That is why colleges must start treating immigration-based targeting as a serious threat to student safety and belonging and take immediate steps to prevent it — as they do with racism, antisemitism and homophobia.

    The incident at Boston University is bigger than one student with extreme views. We are living in a moment shaped by online outrage, anonymous tip lines and a culture that encourages reporting anyone who seems “suspicious.”

    In this environment, some young people have started to believe that calling ICE is a form of civic duty.

    That thinking doesn’t stay online. It walks right into classrooms, dorms and group projects. When it does, the impact is not abstract. It is deeply personal for the immigrant-origin youth sitting in those same rooms.

    Many of these students grew up with fear woven into their daily lives. Their neighbors disappeared overnight, they heard stories of parents being detained at work and they began translating legal mail before they were old enough to drive. They know exactly what an ICE call can set into motion. They carry that fear with them to school.

    These are not hypothetical harms. They show up in everyday decisions: where to sit, what to say, whom to trust. I’ve met students who avoid speaking Spanish on campus, refuse to share their address during class activities and sit near the exits because they’re not sure who views their family as “a threat.” It is not possible to learn well in an environment where you do not feel safe.

    There is a strong body of developmental research highlighting belonging and social inclusion as central to healthy development. In her work on migration and acculturation, Carola Suárez-Orozco shows that legal-status-based distinctions among youth intensify exclusion and undermine both social integration and developmental well-being.

    When belonging erodes, colleges begin to function like small border zones, where everyone is quietly assessing who might turn them in. It is nearly impossible for any campus community to thrive under that kind of pressure.

    Frankly, neither can America’s democracy.

    If we raise a generation of students who feel compelled to police the nation’s borders from their dorms, the immigrant-origin youth sitting beside them in classrooms will carry the psychological burden of those borders every single day. Yet colleges are almost entirely unprepared for this reality.

    Most universities have clear policies for racial slurs, antisemitic threats, homophobic harassment and other identity-based harms. But very few have policies that address immigration-based targeting, even though the consequences can be just as severe and, in some cases, life-altering.

    Boston University’s president acknowledged the distress caused by that student’s actions. Yet, the university did not classify the behavior as discriminatory, despite the fact that his calls targeted a specific ethnic and immigration-status group. That silence sends a clear message: Harm against immigrant communities is unimportant, incidental or simply “political.” But this harm is neither political nor the price of free expression or civic engagement; it is targeted intimidation, with real and measurable consequences for students’ safety, mental health and academic engagement.

    In my view, colleges need to take three straightforward steps:

    1. Define immigration-based harassment as misconduct. Calling ICE on classmates, doxxing immigrant peers or circulating immigration-related rumors should be classified under the same conduct codes that protect students from other forms of targeted harm. Schools know how to do this; they simply have not applied those same protections to immigrant communities.

    2. Train faculty and staff on how to respond. Professors should have a clear understanding of what to do when immigration rhetoric is weaponized in the classroom, or when students express fear about being reported. Although many professors want to help, they may lack basic guidance.

    3. Teach immigration literacy as part of civic education. Most students do not understand what ICE detention entails, how long legal cases can drag on or what it means to live with daily fear as their immigrant peers do. Teaching these realities isn’t “political indoctrination”; it is preparation for life in a multicultural democracy.

    These three steps are not radical. They are merely the same kinds of protections colleges already provide to students targeted for other aspects of their identity.

    The Boston University case is a warning, not an isolated moment. If campuses fail to respond, more young people will internalize the idea that policing their peers is simply part of student life. Immigrant-origin youth, who have done nothing wrong, will carry the emotional burden alone.

    As students, educators and researchers, we have to decide what kind of learning communities we want to build and sustain. Schools can be places where students understand one another, or they can become places of intense surveillance. That choice will shape not just campus climates, but also the society current students will eventually lead.

    Madison Forde is a doctoral student in the Clinical/Counseling Psychology program at New York University.

    Contact the opinion editor at [email protected].

    This story about immigration-based targeting at colleges was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

    This article first appeared on The Hechinger Report (hechingerreport.org) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

  • The case for treating adults as adults when it comes to AI chatbots

    For many people, artificial intelligence chatbots make daily life more efficient. AI can manage calendars, compose messages, and provide quick answers to all kinds of questions. People interact with AI chatbots to share thoughts, test ideas, and explore language. This technology, in various ways, is playing a larger and larger role in how we think, work, and express ourselves. 

    But not all the news is good, and some people want to use the law to crack down on AI.

    Recent news reports describe a wave of lawsuits alleging that OpenAI’s generative AI chatbot, ChatGPT, caused adult users psychological distress. The filings reportedly seek monetary damages for people who conversed at length with a chatbot’s simulated persona and reported experiencing delusions and emotional trauma. In one reported case, a man became convinced that ChatGPT was sentient and later took his own life. 

    These situations are tragic and call for genuine compassion. Unfortunately, if these lawsuits succeed, they’ll impose an unworkable expectation on anyone creating a chatbot: scrub anything that could trigger its most vulnerable users. Everyone, even fully capable adults, would effectively be treated as if they were on suicide watch. That’s a standard that would chill open discourse.

    Adults are neither impervious to nor helpless before AI’s influence on their lives and minds, but treating them like minors is not the solution.

    Like the printing press, the telegraph, and the internet before it, artificial intelligence is an expressive tool. A prompt, an instruction, or even a casual question reflects a user’s intent and expressive choice. The constant across its many uses is human agency — it is ultimately a person who decides what to ask, which responses to keep, what results to share, and how to use the material that emerges. Just like the communicative technologies of the past, AI has the potential to amplify human speech rather than replace it, bringing more storytellers, perspectives, and critiques with it.

    Every new expressive medium in its time has faced public scrutiny and renewed calls for government intervention. After Orson Welles’s famous 1938 “War of the Worlds” radio broadcast about a fictional alien invasion, for example, the Federal Communications Commission received hundreds of complaints urging the government to step in. Many letters expressed fear that the new medium could deceive and destabilize listeners. Despite the panic, neither the broadcaster nor Welles, who went on to cinematic fame, faced any formal consequences. As time went on, the dire predictions never materialized.

    Early panic rarely aligns with long-term reality. Much of what once seemed threatening eventually found its place in civic life, revolutionizing our ability to communicate and connect. This includes radio dramas, comic books, TV, and the early web. 

    The attorneys filing lawsuits against these AI companies argue that AI is a product, and if a product predictably causes harm, safeguards are expected, even for adults. But when the “product” is speech, that expectation meets real constitutional limits. Even when harm seemed foreseeable, courts have long refused to hold speakers liable for the psychological effects of their speech on people who choose to engage with it. Composing rap lyrics or televising reports of violence, for example, does not expose the speaker to liability for how listeners or viewers react, even if some are triggered to act out.

    This principle is necessary to protect free expression. Penalizing people for the emotional or psychological impact of their speech invites the government to police the ideas, too. Recent developments in the UK show how this can play out. Under laws that criminalize speech causing “alarm or distress,” people in England and Wales can be fined, aggressively prosecuted, or both, based entirely on the state’s claimed authority to measure the emotional “impact” of what was said. That’s not a model we should import.

    A legal framework worthy of a free society should reflect confidence in adults’ ability to pursue knowledge without government intrusion, and this includes the use of AI tools. Extending child-safety laws or similar liability standards to adult conversations with AI would erode that freedom.

    The same constitutional protections apply when adults interact with speech, even speech generated by AI. That’s because the First Amendment ensures that we meet challenging, misleading, or even false ideas with more speech rather than censorship. More education and debate are the best means to preserve adults’ ability to judge ideas for themselves. This approach also prevents the state from deciding which messages are too dangerous for people to hear — a power that, if granted, will almost certainly be abused. This is the same principle that secures Americans’ right to read subversive books, hear controversial figures speak, and engage with ideas that offend others.

    Regulating adult conversations with AI blurs the line between a government that serves its citizens and one that supervises them. Adulthood presumes the capacity for judgment, including the freedom to err. Being mistaken or misguided is all part of what it means to think and speak for oneself.

    At FIRE, we see this dynamic play out daily on college campuses. These institutions of higher education are meant to prepare young adults for citizenship and self-governance, but instead they often treat students as if discomfort and disagreement are radioactive. Speech codes and restrictions on protests, justified as shields against harm, teach dependence on authority and distrust of one’s own resilience. That same impulse is now being echoed in calls for AI chatbot regulation.

    Yes, words can do harm, even in adulthood. Still, not every harm can be addressed in court or by lawmakers, especially not if it means restricting free expression. Adults are neither impervious to nor helpless before AI’s influence on their lives and minds, but treating them like minors is not the solution.
