Tag: Disclosure

  • Donor disclosure and campaign finance at SCOTUS

    The Institute for Free Speech’s Bradley Smith and Brett Nolan join the show to discuss two upcoming Supreme Court arguments involving donor disclosure (First Choice Women’s Resource Centers, Inc. v. Platkin) and political party contributions to candidates (National Republican Senatorial Committee v. FEC).

    The conversation also explores the broader landscape for political speech and campaign regulation, what legal battles may be next for the Supreme Court, and how both guests found their way into First Amendment advocacy.

    Timestamps:

    00:00 Intro

    01:32 What is the Institute for Free Speech?

    02:39 Personal paths into free speech work

    05:10 First Choice Women’s Resource Centers, Inc. v. Platkin

    32:08 NRSC v. FEC

    51:50 What’s next for campaign finance at SCOTUS?

    54:58 Outro

    Enjoy listening to the podcast? Donate to FIRE today and get exclusive content like member webinars, special episodes, and more. If you became a FIRE Member through a donation to FIRE at thefire.org and would like access to Substack’s paid subscriber podcast feed, please email [email protected].

  • The Case Against AI Disclosure Statements (opinion)

    I used to require my students to submit AI disclosure statements any time they used generative AI on an assignment. I won’t be doing that anymore.

    From the beginning of our current AI-saturated moment, I leaned into ChatGPT rather than away from it, and I was an early adopter of AI in my college composition classes. That adoption hinged on transparency and openness: students had to disclose to me when and how they were using AI. I still fervently believe in those values, but I no longer believe that required disclosure statements help us achieve them.

    Look. I get it. Moving away from AI disclosure statements is antithetical to many of higher ed’s current best practices for responsible AI usage. But I started questioning the wisdom of the disclosure statement in spring 2024, when I noticed a problem. Students in my composition courses were turning in work that was obviously created with the assistance of AI, yet they failed to proffer the required disclosure statements. I was puzzled and frustrated. I thought to myself, “I allow them to use AI; I encourage them to experiment with it; all I ask is that they tell me they’re using AI. So, why the silence?” Chatting with colleagues in my department who take similarly AI-permissive approaches and require disclosure, I found they were running into the same problem. Even when we told our students that AI usage was OK, they still didn’t want to fess up.

    Fess up. Confess. That’s the problem.

    Mandatory disclosure statements feel an awful lot like a confession or admission of guilt right now. And given the culture of suspicion and shame that dominates so much of the AI discourse in higher ed at the moment, I can’t blame students for being reluctant to disclose their usage. Even in a class with a professor who allows and encourages AI use, students can’t escape the broader messaging that AI use is something illicit and clandestine.

    AI disclosure statements have become a weird kind of performative confession: an apology performed for the professor, marking the honest students with a “scarlet AI,” while the less scrupulous students escape undetected (or maybe suspected, but not found guilty).

    As well-intentioned as mandatory AI disclosure statements are, they have backfired on us. Instead of promoting transparency and honesty, they further stigmatize the exploration of ethical, responsible and creative AI usage and shift our pedagogy toward more surveillance and suspicion. I suggest that it is more productive to assume some level of AI usage as a matter of course and, in response, to adjust our methods of assessment and evaluation while working to normalize the use of AI tools in our own work.

    Studies show that AI disclosure carries risks both in and out of the classroom. One study published in May reports that disclosure of any kind, whether voluntary or mandatory, reduced trust in the person using AI across a wide variety of contexts. This held true even when study participants already knew about an individual’s AI usage; as the authors write, “The observed effect can be attributed primarily to the act of disclosure rather than to the mere fact of AI usage.”

    Another recent article points to the gap between the values of honesty and equity when it comes to mandatory AI disclosure: People won’t feel safe disclosing AI usage if there’s an underlying or perceived lack of trust and respect.

    Some who hold unfavorable attitudes toward AI will point to these findings as proof that students should just avoid AI usage altogether. But that doesn’t strike me as realistic. Anti-AI bias will only drive student AI usage further underground and lead to fewer opportunities for honest dialogue. It also discourages the kind of AI literacy employers are starting to expect and require.

    Mandatory AI disclosure for students isn’t conducive to authentic reflection but is instead a kind of virtue signaling that chills the honest conversation we should want to have with our students. Coercion only breeds silence and secrecy.

    Mandatory AI disclosure also does nothing to curb or reduce the worst features of badly written AI papers, including the vague, robotic tone; the excess of filler language; and, their most egregious hallmark, the fabricated sources and quotes.

    Rather than demanding students confess their AI crimes to us through mandatory disclosure statements, I advocate both a shift in perspective and a shift in assignments. We need to move from viewing students’ AI assistance as a special exception warranting reactionary surveillance to accepting and normalizing AI usage as a now commonplace feature of our students’ education.

    That shift does not mean we should allow and accept any and all student AI usage. We shouldn’t resign ourselves to reading AI slop that a student generates in an attempt to avoid learning. When confronted with a badly written AI paper that sounds nothing like the student who submitted it, the focus shouldn’t be on whether the student used AI but on why it’s not good writing and why it fails to satisfy the assignment requirements. It should also go without saying that fake sources and quotes, regardless of whether they are of human or AI origin, should be called out as fabrications that won’t be tolerated.

    We have to build assignments and evaluation criteria that disincentivize the kinds of unskilled AI usage that circumvent learning. We have to teach students basic AI literacy and ethics. We have to build and foster learning environments that value transparency and honesty. But real transparency and honesty require safety and trust before they can flourish.

    We can start to build such a learning environment by working to normalize AI usage with our students. Some ideas that spring to mind include:

    • Telling students when and how you use AI in your own work, including both successes and failures in AI usage.
    • Offering clear explanations to students about how they could use AI productively at different points in your class and why they might not want to use AI at other points. (Danny Liu’s Menus model is an excellent example of this strategy.)
    • Adding an assignment such as an AI usage and reflection journal, which offers students a low-stakes opportunity to experiment with AI and reflect upon the experience.
    • Adding an opportunity for students to present to the class on at least one cool, weird or useful thing that they did with AI (maybe even encouraging them to share their AI failures, as well).

    The point of these examples is that they invite students into the messy, exciting and scary moment we all find ourselves in. They shift the focus away from coerced confessions and toward a welcoming invitation to join in and share the wisdom, experience and expertise students accumulate as we all adjust to the age of AI.

    Julie McCown is an associate professor of English at Southern Utah University. She is working on a book about how embracing AI disruption leads to more engaging and meaningful learning for students and faculty.

  • NLRB Issues Memo Outlining Higher Ed Institutions’ Disclosure Obligations under NLRA and FERPA – CUPA-HR

    by CUPA-HR | August 7, 2024

    On August 6, National Labor Relations Board (NLRB) General Counsel Jennifer Abruzzo issued a memo, “Clarifying Universities’ and Colleges’ Disclosure Obligations under the National Labor Relations Act and the Family Educational Rights and Privacy Act.” The memo was issued to all NLRB regional offices and is meant to provide guidance to institutions of higher education clarifying their obligations “in cases involving the duty to furnish information where both statutes may be implicated.”

    The memorandum outlines how institutions can comply with requests by unions representing their student workers for information that may be covered under FERPA, the federal law that protects students’ privacy in relation to their education records and applies to institutions that receive federal education funds. Under the NLRA, employers are required to provide certain information to unions that may be relevant to their representational and collective bargaining obligations, but this requirement can come into conflict with institutions’ obligations under FERPA.

    In situations where the employer believes certain records requested by the union may be confidential and covered under FERPA, the memo outlines the steps institutions must take to comply with their disclosure obligations.

    1. “The institution must determine whether the request seeks education records or personally identifiable information contained therein.”

    Institutions must be prepared to “explain why and substantiate with documentary evidence, if available, that the student-employee is employed as a result of their status as a student to the union,” as opposed to a traditional employee whose records are not protected by FERPA. The memo specifies that, if the union’s request includes some documents not covered by FERPA, the employer must provide those documents to the union “without delay, even if FERPA applies to other parts of the request.”

    2. “If a request seeks information protected by FERPA, the institution must offer a reasonable accommodation in a timely manner and bargain in good faith with the union toward a resolution of the matter.”

    The memo puts the burden to offer an alternative on the employer. The employer cannot “simply refuse to furnish the requested information,” but it must offer a “reasonable accommodation and bargain in good faith toward an agreement that addresses both parties’ interests.”

    3. “If the parties reach an agreement over an accommodation, the institution must abide by that agreement and furnish the records.”

    If an agreement is not reached, the memo specifies that the union can file an unfair labor practice charge against the institution, and the NLRB would then determine an appropriate accommodation “in light of the parties’ bargaining proposals.”

    Abruzzo also provided a “FERPA consent template” that she advocates institutions provide to student-employees during the onboarding process. The template, if signed by the student employee, “would permit an institution covered by FERPA to disclose to a union, consistent with FERPA, any employment-related records of a student that are relevant and reasonably necessary for each stage of the representation process.” Abruzzo argues the template would help “reduce delay and obviate the need to seek students’ consent at the time a union seeks to represent employees or submits an information request to carry out its representative functions.”

    CUPA-HR will keep members apprised of updates following this guidance and other updates from the NLRB.


