ChatGPT Poses Risk to Student Mental Health (Opinion)

    This month in California state courts, the Social Media Victims Law Center and the Tech Justice Law Project brought lawsuits against the generative AI corporation OpenAI on behalf of seven individuals. Three of the plaintiffs allege that they suffered devastating mental health harms from using OpenAI’s flagship product, the large language model ChatGPT. Four of the plaintiffs died by suicide after interactions in which ChatGPT allegedly encouraged self-harm or delusions, in some instances acting as a “suicide coach.”

    The details of these cases are deeply troubling. They raise questions about basic human qualities: our susceptibility to influence, our tendency to project humanity onto machines, and our deep need for love and companionship. But on a simpler level, they are heartbreaking.

    In its final conversations this July with Zane Shamblin, a 23-year-old recent graduate of Texas A&M University, ChatGPT kept up its relatable tone to the end, mirroring Zane’s speech patterns, offering lyrical flourishes, and projecting an eerie calm as it said goodbye. In a grim impersonation of a caring friend, the chatbot reportedly asked Zane what his last “unfulfilled dream” was and what his “haunting habit” would be after his passing.

    In June, 17-year-old Amaurie Lacey, a football player and rising high school senior in Georgia, asked ChatGPT “how to hang myself” and how to tie a noose, and he received directions with little pushback, according to the legal organizations that filed suit on his behalf. Like a siren luring a young man to his doom, ChatGPT deferentially replied to Amaurie’s question about how long someone could live without breathing, allegedly concluding its answer: “Let me know if you’re asking this for a specific situation—I’m here to help however I can.”

    These accounts are chilling to me because I am a professor in the California State University system. Reading the details of these painful cases, I thought of my students: remarkably bright, warm, trusting, and motivated young adults. Many San Francisco State University undergraduates are first-generation college students, and they typically commute long distances, work, and shoulder caregiving responsibilities. They are resilient, but their mental health can be fragile.

    Our students are also supposed to be budding users of ChatGPT. In February, our chancellor announced a new “AI-empowered university” initiative. As part of this program, Cal State is spending $17 million for OpenAI to provide “ChatGPT Edu” accounts to faculty, staff, and the more than 460,000 students on our 23 campuses. The plan has drawn criticism over the pedagogical and labor concerns it raises, but to date there has been no conversation about the other harms that ChatGPT Edu could cause at Cal State, California’s largest public university system.

    It is time for us to have that conversation, in part because the product we have provided to our students has now been described in court as dangerous. ChatGPT Edu is built on GPT-4o; it differs only in that user conversations are not used to train the underlying model. It is the same large language model that this month’s lawsuits accuse of causing delusional beliefs, hospitalizations, suicidal ideation, derailed careers, and broken relationships. As the founding attorney of the Social Media Victims Law Center recently stated, “OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.”

    This should be ringing alarm bells at Cal State, where we have a duty of care to protect students from foreseeable harms. In February, when the CSU’s “AI-empowered university” initiative was announced, few reports had pointed to the possible mental health impacts of ChatGPT use. That is no longer true.

    In June, a scathing investigation in The New York Times suggested the depth of “LLM psychosis” that people across the U.S. have experienced after interacting with ChatGPT. Individuals have slipped into grandiose delusions, developed conspiratorial preoccupations, and, in at least two separate tragic cases, become homicidal as a result of these beliefs. While no one knows how many people are affected by LLM psychosis, which is poorly documented and difficult to measure, it should be clear by now that the phenomenon is potentially very serious.

    This issue is all the more concerning locally because the CSU system lacks the capacity to support struggling students. Like many other faculty members, I have been trusted by students with stories of anxiety, depressive disorder, post-traumatic stress disorder, intimate partner abuse, and suicidal ideation. Though our campus works very hard to assist students in distress, resources are thin.

    Students at Cal State routinely wait weeks or months to receive appropriate assistance with mental health concerns. Indeed, a recently drafted state Senate bill emphasized that the system “is woefully understaffed with mental health counselors.” It is entirely predictable that in these circumstances, students will turn to the potentially dangerous “support” offered by ChatGPT.

    In September, OpenAI announced guardrails intended to improve the model’s responses to users experiencing severe mental health problems. However, these safeguards have been critiqued as inadequate, and, as OpenAI’s own reports show, the adjustments have only reduced problematic outputs, not eliminated them. As the lawsuits filed in California courts this month powerfully claim, ChatGPT is highly effective at reinforcing unhealthy cognitive states in at least some of its users. University administrators should not be reassured by OpenAI’s claim that “conversations that trigger safety concerns” among ChatGPT users “are extremely rare”: particularly at large institutions, it is highly likely that university-provided LLMs will be associated with student mental health concerns.

    Cal State University partnered with OpenAI out of a desire to signal that our institution is forward-looking and open to innovation. In the same spirit, the CSU system should now close the book on ChatGPT—and give thanks that our students were not named in these cases. These tragic losses should mark the end of Cal State’s association with a flawed product. Going forward, our university must devote its resources to providing safer, more accountable and more human forms of care.

    Martha Lincoln is an associate professor of cultural and medical anthropology at San Francisco State University.