Tag: legislative

  • A legislative solution to student suicide prevention: advocating for opt-out consent in response to student welfare concerns

    Authored by Dr Emma Roberts, Head of Law at the University of Salford.

    The loss of a student to suicide is a profound and heartbreaking tragedy, leaving families and loved ones devastated, while exposing critical gaps in the support systems within higher education. Each death is not only a personal tragedy but also a systemic failure, underscoring the urgent need for higher education institutions to strengthen their safeguarding frameworks.

    Recent government data revealed that 5.7% of home students disclosed a mental health condition to their university in 2021/22, a significant rise from under 1% in 2010/11. Despite this growing awareness of mental health challenges, the higher education sector is grappling with the alarming persistence of student suicides.

    The Office for National Statistics (ONS) reported a rate of 3.0 deaths per 100,000 students in England and Wales in the academic year ending 2020, equating to 64 lives lost. Behind each statistic lies a grieving family, unanswered questions and the haunting possibility that more could have been done. These statistics force universities to confront uncomfortable truths about their ability to support vulnerable students.

    The time for piecemeal solutions has passed. To confront this crisis, bold and systemic reforms are required. One such reform – the introduction of an opt-out consent system for welfare contact – has the potential to transform how universities respond to students in crisis.

    An opt-out consent model

    At present, universities typically rely on opt-in systems, where students are asked to nominate a contact to be informed in emergencies. This has come to be known as the Bristol consent model. Where such systems exist, however, they are not always invoked when students face severe mental health challenges. The reluctance often stems from concerns about breaching confidentiality laws and the fear of legal repercussions. This hesitancy can cause critical delays in involving a student’s support network at the very time their wellbeing is most at risk, leaving universities unable to provide timely, life-saving interventions. Moreover, evidence suggests that many students, particularly those experiencing mental health challenges, fail to engage with these systems, leaving institutions unable to notify loved ones when serious concerns arise.

    Not all universities have such a system in place. And some, while they may have a ‘nominated person’ process, lack the infrastructure to contact that nominated person effectively when it is most needed.

    An opt-out consent model would reverse this default, automatically enrolling students into a system where a trusted individual – such as a parent, guardian or chosen contact – can be notified if their wellbeing raises grave concerns. Inspired by England and Wales’ opt-out system for organ donation, this approach would prioritise safeguarding without undermining student autonomy.

    Confidentiality must be balanced with the need to protect life. An opt-out model offers precisely this balance, creating a proactive safety net that supports students while respecting their independence.

    Legislative provision

    For such a system to succeed, it must be underpinned by robust legislation and practical safeguards. Key measures would include:

    1. Comprehensive communication: universities must clearly explain the purpose and operation of the opt-out system during student onboarding, ensuring that individuals are fully informed of their rights and options.
    2. Defined triggers: criteria for invoking welfare contact must be transparent and consistently applied. This might include extended absences, concerning behavioural patterns or explicit threats of harm.
    3. Regular reviews: students should have opportunities to update or withdraw their consent throughout their studies, ensuring the system remains flexible and respectful of changing personal circumstances.
    4. Privacy protections: institutions must share only essential information with the nominated contact, ensuring the student’s broader confidentiality is preserved.
    5. Staff training: university staff, including academic and professional services personnel, must receive regular training on recognising signs of mental health crises, navigating confidentiality boundaries and ensuring compliance with the opt-out system’s requirements. This training would help ensure interventions are timely, appropriate and aligned with legal and institutional standards.
    6. Reporting and auditing: universities should implement robust reporting and auditing mechanisms to assess the effectiveness of the opt-out system. This should include maintaining records of instances where welfare contact was invoked, monitoring outcomes and conducting periodic audits to identify gaps or areas for improvement. Transparent reporting would not only enhance accountability but also foster trust among stakeholders.

    Lessons from the organ donation model

    The opt-out system for organ donation introduced in both Wales and England demonstrates the effectiveness of reframing consent to drive societal benefit. Following its implementation, public trust was maintained and the number of registered organ donors increased. A similar approach in higher education could establish a proactive baseline for safeguarding without coercing students into participation.

    Addressing legal and cultural barriers

    A common barrier to implementing such reforms is the fear of overstepping legal boundaries. Currently, universities hesitate to breach confidentiality, even in critical situations, for fear of undermining trust, violating privacy and prompting litigation. Enshrining the opt-out system in law, incorporating the key measures listed above, would give institutions the clarity and confidence to act decisively and ensure consistency across the sector. Culturally, universities must address potential scepticism by engaging students, staff and families in dialogue about the system’s goals and safeguards.

    The need for legislative action

    To ensure the successful implementation of an opt-out consent system, decisive actions are required from both the government and higher education institutions. The government must take the lead by legislating the introduction of this system, creating a consistent, sector-wide approach to safeguarding student wellbeing. Without legislative action, universities will remain hesitant, lacking the legal clarity and confidence needed to adopt such a bold model.

    Legislation is the only way to ensure every student, regardless of where they study, receives the same high standard of protection, ending the current postcode lottery in safeguarding practices across the sector.

    A call for collective action

    Universities, however, must not wait idly for legislation to take shape. They have a moral obligation to begin addressing the gaps in their welfare notification systems now. By expanding or introducing opt-in systems as an interim measure, institutions can begin closing these gaps, gathering critical data and refining their practices in readiness for a sector-wide transition.

    Universities should unite under sector bodies to lobby the government for legislative reform, demonstrating their collective commitment to safeguarding students. Furthermore, institutions must engage their communities – students, staff and families – in a transparent dialogue about the benefits and safeguards of the opt-out model, ensuring a broad base of understanding and support for its eventual implementation.

    This dual approach of immediate institutional action paired with long-term legislative reform represents a pragmatic and proactive path forward. Universities can begin saving lives today while laying the groundwork for a robust, consistent and legally supported safeguarding framework for the future.

    Setting a new standard for student safeguarding

    The rising mental health crisis among students demands more than institutional goodwill – it requires systemic change. While the suicide rate among higher education students is lower than in the general population, this should not be a cause for complacency. Each loss is a profound tragedy and a clear signal that systemic improvements are urgently needed to save lives. Higher education institutions have a duty to prioritise student wellbeing and must ensure that their environments offer the highest standards of safety and support. An opt-out consent system for welfare contact is not a panacea, but it represents a critical step towards creating safer and more supportive university environments.

    The higher education sector has long recognised the importance of student wellbeing, yet its current frameworks remain fragmented and reactive. This proposal is both bold and achievable. It aligns with societal trends towards proactive safeguarding, reflects a compassionate approach to student welfare and offers a legally sound mechanism to prevent future tragedies.

    The loss of 64 students to suicide in a single academic year is a stark reminder that the status quo is failing. By adopting an opt-out consent system, universities can create a culture of care that saves lives, supports grieving families and fulfils their duty to protect students.

    The time to act is now. With legislative backing and sector-wide commitment, this reform could become a cornerstone of a more compassionate and effective national response to student suicide prevention.

  • FIRE kicks off legislative season by opposing speech-restrictive AI bill

    The legislative season is in full swing, and FIRE is already tackling a surge of speech-restrictive bills. We started with Washington’s House Bill 1170, which would require AI-generated content to include a disclosure.  

    FIRE Legislative Counsel John Coleman testified in opposition to the bill. In his testimony, John emphasized what FIRE has been saying for years: that the “government can no more compel an artist to disclose whether they created a painting from a human model as opposed to a mannequin than it can compel someone to disclose that they used artificial intelligence tools in creating an expressive work.”

    Artificial intelligence, like earlier technologies such as the printing press, the camera, and the internet, has the power to revolutionize communication. The First Amendment protects the use of all these mediums for expression and forbids government interference under most circumstances. Importantly, the First Amendment protects not only the right to speak without fear of government retaliation but also the right not to speak. Government-mandated disclosures relating to speech, like those required under HB 1170, infringe on these protections and so are subject to heightened levels of First Amendment scrutiny. 

    Of course, as John stated, “Developers and users can choose to disclose their use of AI voluntarily, but government-compelled speech, whether that speech is an opinion or fact or even just metadata . . . undermines everyone’s fundamental autonomy to control their own expression.”

    In fact, the U.S. Court of Appeals for the Ninth Circuit (which includes Washington state) reiterated this fundamental principle just last year in X Corp. v. Bonta when it blocked a California law requiring social media platforms to publish information about their content moderation practices. Judge Milan D. Smith, Jr. acknowledged the government’s stated interest in transparency, but emphasized that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”

    This principle is likely to put HB 1170 in significant legal jeopardy.

    Another major problem with the policy embodied by HB 1170 is that it would apply to all AI-generated media rather than targeting a specific problem, like unlawful deceptive uses of AI, such as defamation. John pointed out to lawmakers that “if the intent of the bill is to root out deceptive uses of AI, this bill would do the opposite” by fostering the false impression that all AI-generated media is deceptive. In reality, AI-generated media — like all media — can be used to share both truth and falsehood. 

    Moreover, people using AI to commit actual fraud will likely find ways to avoid disclosing that AI was used, whether by removing evidence of AI use or by using tools from states without disclosure requirements. As a result, this false content will appear more legitimate than it would in a world without the bill’s mandated disclosures, because people will be more likely to assume that content lacking a disclosure was not created with AI.

    Rather than preemptively imposing blanket rules that will stifle free expression, lawmakers should instead assess whether existing legal frameworks sufficiently address the concerns they have with AI. 

    FIRE remains committed to defending the free speech rights of all Americans and will continue to advocate against overbroad policies that stifle innovation and expression.

  • FIRE statement on legislative proposals to regulate artificial intelligence

    As the 2025 legislative calendar begins, FIRE is preparing for lawmakers at both the state and federal levels to introduce a deluge of bills targeting artificial intelligence. 

    The First Amendment applies to artificial intelligence just as it does to other expressive technologies. Like the printing press, the camera, and the internet, AI can be used as an expressive tool — a technological advance that helps us communicate with one another and generate knowledge. As FIRE Executive Vice President Nico Perrino argued in The Los Angeles Times last month: “The Constitution shouldn’t be rewritten for every new communications technology.” 

    We again remind legislators that existing laws — cabined by the narrow, well-defined exceptions to the First Amendment’s broad protection — already address the vast majority of harms legislatures may seek to counter in the coming year. Laws prohibiting fraud, forgery, discrimination, and defamation, for example, apply regardless of how the unlawful activity is ultimately carried out. Liability for unlawful acts properly falls on the perpetrator of those acts, not the informational or communicative tools they use. 

    Some legislative initiatives seeking to govern the use of AI raise familiar First Amendment problems. For example, regulatory proposals that would require “watermarks” on artwork created by AI or mandate disclaimers on content generated by AI violate the First Amendment by compelling speech. FIRE has argued against these kinds of efforts to regulate the use of AI, and we will continue to do so — just as we have fought against government attempts to compel speech in school, on campus, or online.

    Lawmakers have also sought to regulate or even criminalize the use of AI-generated content in election-related communications. But courts have been wary of legislative attempts to control AI’s output when political speech is implicated. Following a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, for example, a federal district court recently enjoined a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content. 

    Content-based restrictions like California’s law require strict judicial scrutiny, no matter how the expression is created. As the federal court noted, the constitutional protections “safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered.” So while lawmakers might harbor “a well-founded fear of a digitally manipulated media landscape,” the court explained, “this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.” 

    Other legislative proposals threaten the First Amendment by imposing burdens directly on the developers of AI models. In the coming months, for example, Texas lawmakers will consider the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, a sweeping bill that would impose liability on developers, distributors, and deployers of AI systems that may introduce a risk of “algorithmic discrimination,” including by private actors. The bill vests broad regulatory authority in a newly created state “Artificial Intelligence Council” and imposes steep compliance costs. TRAIGA compels developers to publish regular risk reports, a requirement that will raise First Amendment concerns when applied to an AI model’s expressive output or the use of AI as a tool to facilitate protected expression. Last year, a federal court held that a similar reporting requirement imposed on social media platforms was likely unconstitutional.

    TRAIGA’s provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. Addressing unlawful discrimination is an important legislative aim, and lawmakers are obligated to ensure we all benefit from the equal protection of the law. At the same time, our decades of work defending student and faculty rights have left FIRE all too familiar with the chilling effect on speech that results from expansive or arbitrary interpretations of anti-discrimination law on campus. We will oppose poorly crafted legislative efforts that would functionally build the same chill into artificial intelligence systems.

    The sprawling reach of legislative proposals like TRAIGA runs headlong into the expressive rights of the people building and using AI models. Rather than compelling disclaimers or imposing content-based restrictions on AI-generated expression, legislators should remember the law already protects against defamation, fraud, and other illegal conduct. And rather than preemptively saddling developers with broad liability for an AI model’s possible output, lawmakers must instead examine the recourse existing laws already provide victims of discrimination against those who would use AI — or any other communicative tool — to unlawful ends.

    FIRE will have more to say on the First Amendment threats presented by legislative proposals regarding AI in the weeks and months to come.
