Tag: chatbots

  • The case for treating adults as adults when it comes to AI chatbots


    For many people, artificial intelligence chatbots make daily life more efficient. AI can manage calendars, compose messages, and provide quick answers to all kinds of questions. People interact with AI chatbots to share thoughts, test ideas, and explore language. In these and other ways, the technology is playing an ever larger role in how we think, work, and express ourselves.

    But not all the news is good, and some people want to use the law to crack down on AI.

    Recent news reports describe a wave of lawsuits alleging that OpenAI’s generative AI chatbot, ChatGPT, caused adult users psychological distress. The filings reportedly seek monetary damages for people who conversed at length with a chatbot’s simulated persona and reported experiencing delusions and emotional trauma. In one reported case, a man became convinced that ChatGPT was sentient and later took his own life. 

    These situations are tragic and call for genuine compassion. Unfortunately, if these lawsuits succeed, they’ll impose an unworkable expectation on anyone creating a chatbot to scrub anything that could trigger its most vulnerable users. Everyone, even fully capable adults, would effectively be treated as if they were on suicide watch. That’s a standard that would chill open discourse.


    Like the printing press, the telegraph, and the internet before it, artificial intelligence is an expressive tool. A prompt, an instruction, or even a casual question reflects a user’s intent and expressive choice. A constant across its many uses is human agency — because it is ultimately a person who decides what to ask, which responses to keep, which results to share, and how to use the material the tool produces. Just like the communicative technologies of the past, AI has the potential to amplify human speech rather than replace it, bringing more storytellers, perspectives, and critiques with it.

    Every new expressive medium in its time has faced public scrutiny and renewed calls for government intervention. After Orson Welles’s famous 1938 “War of the Worlds” radio broadcast about a fictional alien invasion, for example, the Federal Communications Commission received hundreds of complaints urging the government to step in. Many letters expressed fear that the technology could deceive and destabilize people. Despite the panic, neither the broadcaster nor Welles, who went on to cinematic fame, faced any formal consequences. As time went on, the dire predictions never materialized.

    Early panic rarely aligns with long-term reality. Radio dramas, comic books, television, and the early web all drew similar alarm, yet much of what once seemed threatening eventually found its place in civic life, revolutionizing our ability to communicate and connect.

    The attorneys filing lawsuits against these AI companies argue that AI is a product, and that if a product predictably causes harm, safeguards are expected, even for adults. But when the “product” is speech, that expectation meets real constitutional limits. Even when harm seemed foreseeable, courts have long refused to hold speakers liable for the psychological effects of their speech on people who choose to engage with it. Composing rap lyrics or televising reports of violence, for example, doesn’t expose the speaker to liability for how listeners or viewers react, even if some are moved to act out.

    This principle is necessary to protect free expression. Penalizing people for the emotional or psychological impact of their speech invites the government to police the ideas, too. Recent developments in the UK show how this can play out. Under laws that criminalize speech causing “alarm or distress,” people in England and Wales can be fined, aggressively prosecuted, or both, based entirely on the state’s claimed authority to measure the emotional “impact” of what was said. That’s not a model we should import.

    A legal framework worthy of a free society should reflect confidence in adults’ ability to pursue knowledge without government intrusion, and this includes the use of AI tools. Extending child-safety laws or similar liability standards to adult conversations with AI would erode that freedom.


    The same constitutional protections apply when adults interact with speech, even speech generated by AI. That’s because the First Amendment ensures that we meet challenging, misleading, or even false ideas with more speech rather than censorship. More education and debate are the best means of preserving adults’ ability to judge ideas for themselves. That approach also prevents the state from deciding which messages are too dangerous for people to hear — a power that, if granted, would almost certainly be abused and misused. This is the same principle that secures Americans’ right to read subversive books, hear controversial figures speak, and engage with ideas that offend others.

    Regulating adult conversations with AI blurs the line between a government that serves its citizens and one that supervises them. Adulthood presumes the capacity for judgment, including the freedom to err. Being mistaken or misguided is all part of what it means to think and speak for oneself.

    At FIRE, we see this dynamic play out daily on college campuses. These institutions of higher education are meant to prepare young adults for citizenship and self-governance, but instead they often treat students as if discomfort and disagreement are radioactive. Speech codes and restrictions on protests, justified as shields against harm, teach dependence on authority and distrust of one’s own resilience. That same impulse is now being echoed in calls for AI chatbot regulation.

    Yes, words can do harm, even in adulthood. Still, not every harm can be addressed in court or by lawmakers, especially not if it means restricting free expression. Adults are neither impervious to nor helpless before AI’s influence on their lives and minds, but treating them like minors is not the solution.


  • Students Love AI Chatbots — No, Really – The 74


    School (in)Security is our biweekly briefing on the latest school safety news, vetted by Mark Keierleber. Subscribe here.

    The robots have taken over.

    New research suggests that a majority of students use chatbots like ChatGPT for just about everything at school. To write essays. To solve complicated math problems. To find love. 

    Wait, what? 

    Nearly a fifth of students said they or a friend have used artificial intelligence chatbots to form romantic relationships, according to a new survey by the nonprofit Center for Democracy & Technology. Some 42% said they or someone they know used the chatbots for mental health support, as an escape from real life or as a friend.

    Eighty-six percent of students say they’ve used artificial intelligence chatbots in the past academic year — half to help with schoolwork.

    The tech-enabled convenience, researchers conclude, doesn’t come without significant risks for young people. As AI proliferates in schools — with help from the federal government and a zealous tech industry — on the promise of improving student outcomes, they warn, young people could grow socially and emotionally disconnected from the humans in their lives.


    In the news

    The latest in Trump’s immigration crackdown: The survey featured above, which quizzed students, teachers and parents, also offers startling findings on immigration enforcement in schools:
    • While more than a quarter of educators said their school collects information about whether a student is undocumented, 17% said their district shares records — including grades and disciplinary information — with immigration enforcement.
    • In the last school year, 13% of teachers said a staff member at their school reported a student or parent to immigration enforcement of their own accord. | Center for Democracy & Technology

    People hold signs as New York City officials speak at a press conference calling for the release of high school student Mamadou Mouctar Diallo outside of the Tweed Courthouse on Aug. 14 in New York City. (Michael M. Santiago/Getty Images)
    • Call for answers: In the wake of immigration enforcement that’s ensnared children, New York congressional Democrats are demanding the feds release information about the welfare of students held in detention, my colleague Jo Napolitano reports. | The 74
    • A 13-year-old boy from Brazil, who has lived in a Boston suburb since 2021 with a pending asylum application, was scooped up by Immigration and Customs Enforcement after local police arrested him on a “credible tip” accusing him of making “a violent threat” against a classmate at school. The boy’s mother said her son wound up in a Virginia detention facility and was “desperate, saying ICE had taken him.” | CNN
    • Chicago teenagers are among a group of activists patrolling the city’s neighborhoods to monitor ICE’s deployment to the city and help migrants avoid arrest. | NPR
    • Immigration agents detained a Chicago Public Schools vendor employee outside a school, prompting educators to move physical education classes indoors out of an “abundance of caution.” | Chicago Sun-Times
    • A Des Moines, Iowa, high schooler was detained by ICE during a routine immigration check-in, placed in a Louisiana detention center and deported to Central America fewer than two weeks later. | Des Moines Register
    • A 15-year-old boy with disabilities — who was handcuffed outside a Los Angeles high school after immigration agents mistook him for a suspect — is among more than 170 U.S. citizens, including nearly 20 children, who have been detained during the first nine months of the president’s immigration push. | PBS

    Trigger warning: After a Washington state teenager hanged himself on camera, the 13-year-old boy’s parents set out to find out what motivated their child to livestream his suicide on Instagram while online users watched. Evidence pointed to a sadistic online group that relies on torment, blackmail and coercion to weed out teens they deem weak. | The Washington Post

    Civil rights advocates in New York are sounding the alarm over a Long Island school district’s new AI-powered surveillance system, which includes round-the-clock audio monitoring with in-classroom microphones. | StateScoop

    A federal judge has ordered the Department of Defense to restock hundreds of books after a lawsuit alleged students were banned from checking out texts related to race and gender from school libraries on military bases in violation of the First Amendment. | Military.com

    More than 600 armed volunteers in Utah have been approved to patrol campuses across the state to comply with a new law requiring armed security. Called school guardians, the volunteers are existing school employees who agree to be trained by local law enforcement and carry guns on campus. | KUER


    No “Jackass”: Instagram announced new PG-13 content features that restrict teenagers from viewing posts that contain sex, drugs and “risky stunts.” | The Associated Press

    A Tuscaloosa, Alabama, school resource officer restrained and handcuffed a county commissioner after a spat at an elementary school awards program. | Tuscaloosa News

    The number of guns found at Minnesota schools has increased nearly threefold in the last several years, new state data show. | Axios

    More than half of Florida’s school districts received bomb threats on a single evening last week. The threats weren’t credible, officials said, and appeared to be “part of a hoax intended to solicit money.” | News 6


    ICYMI @The74

    RAPID Survey Project, Stanford Center on Early Childhood

    Survey: Nearly Half of Families with Young Kids Struggling to Meet Basic Needs

    Education Department Leans on Right-Wing Allies to Push Civil Rights Probes

    OPINION: To Combat Polarization and Political Violence, Let’s Connect Students Nationwide


    Emotional Support

    Thanks for reading,
    —Marz




  • Chatbots in Higher Education: Benefits, Challenges, and Strategies to Prevent Misuse – Faculty Focus



  • Students Increasingly Rely on Chatbots, but at What Cost? – The 74




    Students don’t have the same incentives to talk to their professors — or even their classmates — anymore. Chatbots like ChatGPT, Gemini and Claude have given them a new path to self-sufficiency. Instead of asking a professor for help on a paper topic, students can go to a chatbot. Instead of forming a study group, students can ask AI for help. These chatbots give them quick responses, on their own timeline.

    For students juggling school, work and family responsibilities, that ease can seem like a lifesaver. And maybe turning to a chatbot for homework help here and there isn’t such a big deal in isolation. But every time a student decides to ask a question of a chatbot instead of a professor or peer or tutor, that’s one fewer opportunity to build or strengthen a relationship, and the human connections students make on campus are among the most important benefits of college.

    Julia Freeland-Fisher of the Clayton Christensen Institute studies how technology can help or hinder student success. She said the consequences of turning to chatbots for help can compound.

    “Over time, that means students have fewer and fewer people in their corner who can help them in other moments of struggle, who can help them in ways a bot might not be capable of,” she said.

    As colleges further embed ChatGPT and other chatbots into campus life, Freeland-Fisher warns that lost relationships may become a devastating unintended consequence.

    Asking for help

    Christian Alba said he has never turned in an AI-written assignment. Alba, 20, attends College of the Canyons, a large community college north of Los Angeles, where he is studying business and history. And while he hasn’t asked ChatGPT to write any papers for him, he has turned to the technology when a blank page and a blinking cursor seemed overwhelming. He has asked for an outline. He has asked for ideas to get him started on an introduction. He has asked for advice about what to prioritize first.

    “It’s kind of hard to just start something fresh off your mind,” Alba said. “I won’t lie. It’s a helpful tool.” Alba has wondered, though, whether turning to ChatGPT with these sorts of questions represents an overreliance on AI. But Alba, like many others in higher education, worries primarily about AI use as it relates to academic integrity, not social capital. And that’s a problem.

    Jean Rhodes, a psychology professor at the University of Massachusetts Boston, has spent decades studying the way college students seek help on campus and how the relationships formed during those interactions end up benefiting students long-term. Rhodes doesn’t begrudge students integrating chatbots into their workflows, as many of their professors have done, but she worries that students will get inferior answers to even simple-sounding questions, like “How do I change my major?”

    A chatbot might point a student to the registrar’s office, Rhodes said, but had a student asked the question of an advisor, that person might have asked important follow-up questions, such as why the student wants the change, which could lead to a deeper conversation about the student’s goals and roadblocks.

    “We understand the broader context of students’ lives,” Rhodes said. “They’re smart but they’re not wise, these tools.”

    Rhodes and one of her former doctoral students, Sarah Schwartz, created a program called Connected Scholars to help students understand why it’s valuable to talk to professors and have mentors. The program helped them hone their networking skills and understand what people get out of their networks over the course of their lives — namely, social capital.

    Connected Scholars is offered as a semester-long course at UMass Boston, and a forthcoming paper examines outcomes over the last decade, finding that students who take the course are three times more likely to graduate. Over time, Rhodes and her colleagues discovered that the key to the program’s success is getting students past an aversion to asking others for help.

    Students will make a plethora of excuses to avoid asking for help, Rhodes said, ticking off a list of them: “‘I don’t want to stand out,’ ‘I don’t want people to realize I don’t fit in here,’ ‘My culture values independence,’ ‘I shouldn’t reach out,’ ‘I’ll get anxious,’ ‘This person won’t respond.’ If you can get past that and get them to recognize the value of reaching out, it’s pretty amazing what happens.”

    Connections are key

    Seeking human help doesn’t only leave students with the resolution to a single problem; it gives them a connection to another person. And that person could, down the line, become a friend, a mentor or a business partner — a “strong tie,” as social scientists call the people central to one’s network. They could also become a “weak tie,” someone a student may not see often but who could, importantly, still offer a job lead or crucial social support one day.

    Daniel Chambliss, a retired sociologist from Hamilton College, emphasized the value of relationships in his 2014 book, “How College Works,” co-authored with Christopher Takacs. Over the course of their research, the pair found that the key to a successful college experience boiled down to relationships, specifically two or three close friends and one or two trusted adults. Hamilton College goes out of its way to make sure students can form those relationships, structuring work-study to get students into campus offices and around faculty and staff, making room for students of varying athletic abilities on sports teams, and more.

    Chambliss worries that AI-driven chatbots make it too easy to avoid interactions that can lead to important relationships. “We’re suffering epidemic levels of loneliness in America,” he said. “It’s a really major problem, historically speaking. It’s very unusual, and it’s profoundly bad for people.”

    As students increasingly turn to artificial intelligence for help and even casual conversation, Chambliss predicted it will make people even more isolated: “It’s one more place where they won’t have a personal relationship.”

    In fact, a recent study by researchers at the MIT Media Lab and OpenAI found that the most frequent users of ChatGPT — power users — were more likely to be lonely and isolated from human interaction.

    “What scares me about that is that Big Tech would like all of us to be power users,” said Freeland-Fisher. “That’s in the fabric of the business model of a technology company.”

    Yesenia Pacheco is preparing to re-enroll in Long Beach City College for her final semester after more than a year off. Last time she was on campus, ChatGPT existed, but it wasn’t widely used. Now she knows she’s returning to a college where ChatGPT is deeply embedded in the lives of students, faculty and staff, but Pacheco expects she’ll go back to her old habits — going to her professors’ office hours and sticking around after class to ask them questions. She sees the value.

    She understands why others might not. Today’s high schoolers, she has noticed, are not used to talking to adults or building mentor-style relationships. At 24, she knows why they matter.

    “A chatbot,” she said, “isn’t going to give you a letter of recommendation.”

    This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.




  • Same old playbook, new target: AI chatbots


    Chatbots are already transforming how people access information, express themselves, and connect with others. From personal finance to mental health, these tools are becoming an everyday part of digital life. But as their use grows, so does the urgency to protect the First Amendment rights of both developers and users.

    That’s because some state lawmakers are pursuing a familiar regulatory approach: requiring things like blanket age verification, rigid time limits, and mandated lockouts on use. But like other means of digital communication, the development and use of chatbots have First Amendment protection, so any efforts to regulate them must carefully navigate significant constitutional considerations.


    Take New York’s S 5668, which would make every user, including adults, verify their age before chatting, and would fine chatbot providers when a “misleading” or “harmful” reply “results in” any kind of demonstrable harm to the user. This is, in effect, a breathtakingly broad “misinformation” bill that would permit the government to punish speech it deems false — or true but subjectively harmful — whenever it can point to a supposed injury. This is inconsistent with the First Amendment, which precludes the government from regulating chatbot speech it thinks is misleading or harmful — just as it does with any other expression.

    S 5668 would also require that certain companion bots be shut down for 24 hours whenever expressions of potential self-harm are detected, complementing a newly enacted New York prohibition that requires companion chatbots to include protocols to detect and address expressions of self-harm and direct users to crisis services. Both the bill and the new law also require chatbots to remind users that they are AI and not a human being. 

    Sound familiar? States like California, Utah, Arkansas, Florida, and Texas all attempted similar regulatory measures targeting another digital speech technology: social media. Those efforts have resulted in several court injunctions, repeals, vetoes, and blocked implementation because they violated the First Amendment rights of the platforms and users.

    New York is just one of a few states that have introduced similar chatbot legislation. Minnesota’s SF 1857 requires age verification while flatly banning anyone under age 18 from “recreational” chatbots. California’s SB 243 targets undefined “rewarding” chat features, leaving developers to guess what speech is off-limits and pressuring them to censor conversations.

    As we’ve said before, the First Amendment doesn’t evaporate when the speaker’s words depend on computer code. From the printing press to the internet, and now AI, each leap in expressive technology remains under its protective umbrella.  

    This is not because the machine itself has rights; rather, it’s protected by the rights of the developer who created the chatbot and of the users who create the prompts. Just like asking a question in a search engine or posting on social media and the responses they generate, prompting a chatbot involves a developer’s expressive design and the user choosing words to communicate ideas, seek information, or express thoughts. That act of communication is protected under the First Amendment, even when software generates the specific response.

    FIRE will keep speaking out against these bills, which show a growing pattern of government overreach into First Amendment rights when it comes to digital speech. 
