Tag: Laws

  • Three Laws for Curriculum Design in an AI Age (opinion)

    Almost a third of students report that they don’t know how or when to use generative AI to help with coursework. On our campus, students tell us they worry that if they don’t learn how to use AI, they will be left behind in the workforce. At the same time, many students worry that the technology undermines their learning.

    Here’s Gabby, an undergraduate on our campus: “It turned my writing into something I didn’t say. It makes it harder for me to think of my ideas and makes everything I think go away. It replaces it with what is official. It is correct, and I have a hard time not agreeing with it once ChatGPT says it. It overrides me.”

    Students experience additional anxiety around accusations of unauthorized use of AI tools—even when they are not using them. Here’s another student: “If I write like myself, I get points off for not following the rubric. If I fix my grammar and follow the template, my teacher will look at me and assume I used ChatGPT because brown people can’t write good enough.”

    Faculty guidance in the classroom is critical to addressing these concerns, especially as campuses increasingly provide students with access to enterprise GPTs. Our own campus system, California State University, recently rolled out an AI strategy that includes a “landmark” partnership with companies such as OpenAI and a free subscription to ChatGPT Edu for all students, faculty and staff.

    Perhaps unsurprisingly, students are not the only ones who feel confused and worried about AI in this fast-moving environment. Faculty also express confusion about whether and under what circumstances it is OK for their students to use AI technology. In our roles at San Francisco State University’s Center for Equity and Excellence in Teaching and Learning (CEETL), we are often asked about the need for campuswide policies and the importance of tools like Turnitin to ensure academic integrity.

    As Kyle Jensen noted at a recent American Association of Colleges and Universities event on AI and pedagogy, higher ed workers are experiencing a perceived lack of coherent leadership around AI, and an uneven delivery of information about it, in the face of the many demands on faculty and administrative time. Paradoxically, faculty are both keenly interested in the positive potential of AI technologies and insistent on the need for some sort of accountability system that punishes students for unauthorized use of AI tools.

    The need for faculty to clarify the role of AI in the curriculum is pressing. To address this at CEETL, we have developed what we are calling “Three Laws of Curriculum in the Age of AI,” a play on Isaac Asimov’s “Three Laws of Robotics,” written to ensure that humans remained in control of technology. Our three laws are not laws, per se; they are a framework for thinking about how to address AI technology in the curriculum at all levels, from the individual classroom to degree-level road maps, from general education through graduate courses. The framework is designed to support faculty as they work their way through the challenges and promises of AI technologies. The framework lightens the cognitive load for faculty by connecting AI technology to familiar ways of designing and revising curriculum.

    The first law concerns what students need to know about AI, including how the tools work as well as their social, cultural, environmental and labor impacts; potential biases; tendencies toward hallucinations and misinformation; and propensity to center Western European ways of knowing, reasoning and writing. Here we lean on critical AI to help students apply their critical information literacy skills to AI technologies. Thinking about how to teach students about AI aligns with core equity values at our university, and it harnesses faculty’s natural skepticism toward these tools. This first law—teaching students about AI—offers a bridge between AI enthusiasts and skeptics by grounding our approach to AI in the classroom with familiar and widely agreed-upon equity values and critical approaches.

    The second part of our three laws framework asks what students need to know in order to work with AI ethically and equitably. How should students work with these tools as they become increasingly embedded in the platforms and programs they already use, and as they are integrated into the jobs and careers our students hope to enter? As Kathleen Landy recently asked, “What do we want the students in our academic program[s] to know and be able to do with (or without) generative AI?”

    The “with” part of our framework supports faculty as they begin the work of revising learning outcomes, assignments and assessment materials to include AI use.

    Finally, and perhaps most crucially (and related to the “without” in Landy’s question), what skills and practices do students need to develop without AI, in order to protect their learning, to prevent deskilling and to center their own culturally diverse ways of knowing? Here is a quote from Washington University’s Center for Teaching and Learning:

    “Sometimes students must first learn the basics of a field in order to achieve long-term success, even if they might later use shortcuts when working on more advanced material. We still teach basic mathematics to children, for example, even though as adults we all have access to a calculator on our smartphones. GenAI can also produce false results (aka ‘hallucinations’) and often only a user who understands the fundamental concepts at play can recognize this when it happens.”

    Bots sound authoritative, and because their prose is so polished, students can find themselves convinced by them, letting the bot’s output override or displace their own thinking; their use may thus curtail opportunities for students to develop and practice the kinds of thinking that undergird many learning goals. Protecting student learning from AI helps faculty situate their concerns about academic integrity in terms of the curriculum, rather than in terms of detecting or policing student behaviors. It invites faculty to think about how they might redesign assignments to provide spaces for students to do their own thinking.

    Providing and protecting such spaces undoubtedly poses increased challenges for faculty, given the ubiquity of AI tools available to students. But we also know that protecting student learning from easy shortcuts is at the heart of formal education. Consider the planning that goes into determining whether an assessment should be open-book or open-note, take-home or in-class. These decisions are rooted in the third law: What would most protect student learning from the use of shortcuts (e.g., textbooks, access to help) that undermine their learning?

    University websites are awash in resource guides for faculty grappling with new technology. It can be overwhelming for faculty, to say the least, especially given high teaching loads and constraints on faculty time. Our three laws framework provides a scaffold for faculty as they sift through resources on AI and begin the work of redesigning assignments, activities and assessments to address AI. You can see our three laws in action here, in field notes from Jennifer’s efforts to redesign her first-year writing class to address the challenges and potential of AI technology.

    In the spirit of connecting the new with the familiar, we’ll close by reminding readers that while AI technology poses new challenges, these challenges are in some ways not so different from the work of curriculum and assessment design that we regularly undertake when we build our courses. Indeed, faculty have long grappled with the questions raised by our current moment. We’ll leave you with this quote, from a 1991 (!) article by Gail E. Hawisher and Cynthia L. Selfe on the rise of word-processing technology and writing studies:

    “We do not advocate abandoning the use of technology and relying primarily on script and print for our teaching without the aid of word processing and other computer applications such as communication software; nor do we suggest eliminating our descriptions of the positive learning environments that technology can help us to create. Instead, we must try to use our awareness of the discrepancies we have noted as a basis for constructing a more complete image of how technology can be used positively and negatively. We must plan carefully and develop the necessary critical perspectives to help us avoid using computers to advance or promote mediocrity in writing instruction. A balanced and increasingly critical perspective is a starting point: by viewing our classes as sites of both paradox and promise we can construct a mature view of how the use of electronic technology can abet our teaching.”

    Anoshua Chaudhuri is the senior director of the Center for Equity and Excellence in Teaching and Learning and professor of economics at San Francisco State University.

    Jennifer Trainor is a faculty director at the Center for Equity and Excellence in Teaching and Learning and professor of English at San Francisco State University.

  • AI is new — the laws that govern it don’t have to be

    On Monday, Virginia Governor Glenn Youngkin vetoed House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act. The bill would have set up a broad legal framework for AI, adding restrictions to its development and its expressive outputs that, if enacted, would have put the bill on a direct collision course with the First Amendment.

    This veto is the latest in a series of setbacks for a multistate movement to regulate AI development, a movement that originated with a working group assembled last year. In February, that group broke down, further indicating upheaval in a once-ascendant regulatory push.

    At the same time, another movement has gained steam. A number of states are turning to old laws, including those prohibiting fraud, forgery, discrimination, and defamation, which have long managed the same purported harms now attributed to AI when they arose from older technologies.

    Gov. Youngkin’s HB 2094 veto statement echoed the notion that existing laws may suffice, stating, “There are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more.” FIRE has pointed to this capacity of current law in previous statements, part of a number of AI-related interventions we’ve made as the technology has come to dominate state legislative agendas, including in states like Virginia.

    The simple idea that current laws may be sufficient to deal with AI initially eluded many lawmakers but is quickly becoming common sense in a growing number of states. While existing laws may be applied more or less prudently, the emerging trend away from hasty lawmaking and toward more deliberation bodes well for the intertwined future of AI and free speech.

    The regulatory landscape

    AI offers the promise of a new era of knowledge generation and expression, and development continues to advance toward that vision at a critical juncture. Companies are updating their models at a breakneck pace, epitomized by OpenAI’s popular new image generation tool.

    Public and political interest, fueled by fascination and fear, may thus continue to intensify over the next two years — a period during which AI, still emerging from its nascent stage, will remain acutely vulnerable to threats of new regulation. Mercatus Center Research Fellow and leading AI policy analyst Dean W. Ball has hypothesized that 2025 and 2026 could represent the last two years to enact the laws that will be in place before AI systems with “qualitatively transformative capabilities” are released.

    With AI’s rapid development and deployment as the backdrop, states have rushed to propose new legal frameworks, hoping to align AI’s coming takeoff with state policy objectives. Last year saw the introduction of around 700 bills related to AI, covering everything from “deepfakes” to the use of AI in elections. This year, that number is already approaching 900.

    Texas’s TRAIGA, the Texas Responsible Artificial Intelligence Governance Act, has been the highest-profile example from this year’s wave of restrictive AI bills. Sponsored by Republican State Rep. Giovanni Capriglione, TRAIGA has been one of several “algorithmic discrimination” bills that would impose liability on developers, deployers, and often distributors of AI systems that may introduce a risk of “algorithmic discrimination.” 

    Other examples include the recently vetoed HB 2094 in Virginia, Assembly Bill A768 in New York, and Legislative Bill 642 in Nebraska. While the bills have several problems, most concerning is their inclusion of a “reasonable care” negligence standard that would hold AI developers and users liable if there is a greater than 50% chance they could have “reasonably” prevented discrimination.

    Such liability provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. The “chill” of these kinds of provisions threatens a broad array of important applications. 

    In Connecticut, for instance, children’s hospitals have warned that the vagueness and breadth of such regulations could limit health care providers’ ability to use AI to improve cancer screenings. These bills also compel regular risk reports on the models’ expressive outputs, similar to requirements that a federal court held unconstitutional under the First Amendment in other contexts last year.

    So far, only Colorado has enacted such a law. Its implementation, spearheaded by the statutorily authorized Colorado Artificial Intelligence Impact Task Force, won’t assuage any skeptics. Even Gov. Jared Polis, who conceived the task force and signed the bill, has said it deviates from standard anti-discrimination laws “by regulating the results of AI system use, regardless of intent,” and has encouraged the legislature to “reexamine the concept” as the law is finalized.

    With a mandate to resolve this and other points of tension, the task force has come up almost empty-handed. In its report last month, it reached consensus on only “minor … changes,” while remaining deadlocked on substantive areas such as the law’s equivalent language to TRAIGA on reasonable care.

    The sponsors of TRAIGA reached a similar impasse as the bill came under intense political scrutiny. Rep. Capriglione responded earlier this month by dropping TRAIGA in favor of a new bill, HB 149. Among HB 149’s provisions, many of which run headlong into protected expression, is a proposed statute holding that “an artificial intelligence system shall not be developed or deployed in a manner that intentionally results in political viewpoint discrimination” or that “intentionally infringes upon a person’s freedom of association or ability to freely express the person’s beliefs or opinions.”

    But this new language overlooks a landmark Supreme Court ruling from just last year holding that laws in Texas and Florida with similar prohibitions on political discrimination by social media platforms raised significant First Amendment concerns.

    A more modest alternative

    An approach different from that taken in Colorado and Texas appears to be taking root in Connecticut. Last year, Gov. Ned Lamont signaled he would veto Connecticut Senate Bill 2, a bill similar to the law Colorado passed. In reflecting on his reservations, he noted, “You got to know what you’re regulating and be very strict about it. If it’s, ‘I don’t like algorithms that create biased responses,’ that can go any of a million different ways.” 

    At a press conference at the time of the bill’s consideration, his office suggested existing Connecticut anti-discrimination laws could already apply to AI use in relevant areas like housing, employment, and banking.

    Yale School of Management scholars Jeffrey Sonnenfeld and Stephen Henriques expanded on the idea, noting that Connecticut’s Unfair Trade Practices Act would seem to cover major AI developers and small “deployers” alike. They argue that a route preferable to new legislation would be for the state attorney general to clarify how existing laws can remedy the harms to consumers that sparked Senate Bill 2 in the first place.

    Connecticut isn’t alone. In California, which often sets the standard for tech law in the United States, two bills — AB 2930, focusing on liability for algorithmic discrimination in the same manner as the Colorado and Texas bills, and SB 1047, focusing on liability for “hazardous capabilities” — both failed. Gov. Gavin Newsom, echoing Lamont, stressed in his veto statement for SB 1047, “Adaptability is critical as we race to regulate a technology still in its infancy.”

    Newsom’s attorney general followed up by issuing extensive guidance on how existing California laws — such as the Unruh Civil Rights Act, California Fair Employment and Housing Act, and California Consumer Credit Reporting Agencies Act — already provide consumer protections for issues that many worry AI will exacerbate, such as consumer deception and unlawful discrimination. 

    New Jersey, Oregon, and Massachusetts have offered similar guidance, with Massachusetts Attorney General Andrea Joy Campbell noting, “Existing state laws and regulations apply to this emerging technology to the same extent as they apply to any other product or application.” And in Texas, where HB 149 still sits in the legislature, Attorney General Ken Paxton is currently reaching settlements in cases about the misuse of AI products in violation of existing consumer protection law.

    Addressing problems

    The application of existing laws, to be sure, must comport with the First Amendment’s broad protections. Accordingly, not all conceivable applications will be constitutional. But the core principle remains: states that are hitting the brakes and reflecting on the tools already available give AI developers and users the benefit of operating within established, predictable legal frameworks. 

    And if enforcement of existing laws runs afoul of the First Amendment, there is an ample body of legal precedent to provide guidance. Some might argue that AI poses different questions from prior technology covered by existing laws, but it departs in neither essence nor purpose. Properly understood, AI is a communicative tool used to convey ideas, like the typewriter and the computer before it.

    If there are perceived gaps in existing laws as AI and its uses evolve, legislatures may try targeted fixes. Last year, for example, Utah passed a statute clarifying that generative AI cannot serve as a defense to violations of state tort law — for example, a party cannot claim immunity from liability simply because an AI system “made the violative statement” or “undertook the violative act.” 

    Rather than introducing entirely new layers of liability, this provision clarifies accountability under existing statutes. 

    Other ideas floated include “regulatory sandboxes”: voluntary arrangements in which private firms test applications of AI technology in collaboration with the state in exchange for certain regulatory mitigation. The aim is to give policymakers a learning environment in which to study how law and AI interact over time, with emerging issues addressed by a regulatory scalpel rather than a hatchet.

    This reflects an important point. The trajectory of AI is largely unknowable, as is how rules imposed now will affect this early-stage technology down the line. Well-meaning laws to prevent discrimination this year could preclude broad swathes of significant expressive activity in coming years.

    FIRE does not endorse any particular course of action, but this is perhaps the most compelling reason lawmakers should consider the more restrained approach outlined above. Attempting to solve all theoretical problems of AI before the contours of those problems become clear is not only impractical but risks stifling innovation and expression in ways that may be difficult to reverse. History also teaches that many of the initial worries will never materialize.

    As President Calvin Coolidge observed, “If you see 10 troubles coming down the road, you can be sure that nine will run into the ditch before they reach you and you have to battle with only one of them.” We can address those that do materialize in a targeted manner as the full scope of the problems becomes clear.

    The wisest course of action may be patience. Let existing laws do their job and avoid premature restrictions. Like weary parents, lawmakers should take a breath — and maybe a vacation — while giving AI time to grow up a little.

  • Trump Signs Executive Order on Enforcement of Immigration Laws, Potentially Leading to Increased Worksite Enforcement Action

    by CUPA-HR | January 29, 2025

    Along with several immigration-related executive orders and actions issued on Inauguration Day, President Trump signed an executive order titled “Protecting the American People Against Invasion.” The EO sets several directives for U.S. Immigration and Customs Enforcement (ICE) and U.S. Citizenship and Immigration Services (USCIS) to enforce immigration law against immigrants without permanent legal status in the U.S., and it could implicate employers the government deems to be “facilitating” the presence of such individuals.

    Sections 4 and 5 of the EO establish civil and criminal enforcement priorities for relevant federal agencies. Specifically, the EO directs the secretary of Homeland Security to enable ICE and USCIS to set priorities for their agencies that would ensure successful enforcement of final orders of removal. Additionally, Section 8 of the EO directs increased enforcement action in the form of civil fines and penalties. The EO directs the secretary of Homeland Security to ensure assessment and collection of all fines and penalties from individuals unlawfully present in the U.S. and, notably, those who facilitate such individuals’ presence in the U.S.

    Depending on how the agencies respond to this order, these three sections of the EO could lead to an uptick in worksite enforcement. Agencies could step up enforcement of employment-related immigration law through actions such as Form I-9 audits, investigations and worksite visits related to immigration compliance. Employers who are not in compliance with federal immigration laws could be considered entities that “facilitate” the presence of immigrants without permanent legal status, exposing them to significant fines and other penalties.

    Next Steps for HR Leaders

    CUPA-HR has always worked to help you ensure that your institution’s Form I-9 processes comply with federal requirements, and we’ve partnered with USCIS for many years to provide periodic guidance, support and resources. We also understand that it is sometimes a challenge to ensure total compliance for large, sprawling campuses and that some of you have employees at worksites across your state, the country and the globe. Through speeches and actions like this executive order, the Trump administration has made it clear that it intends to focus enforcement efforts on immigrants without permanent legal status and the businesses employing them. As noted above, there could be I-9 audits and site visits to ensure compliance. Penalties for noncompliance could include very large fines and loss of federal funding.

    In light of this EO, it is vital for institutions to review their compliance with immigration laws regarding employment eligibility and work authorization. There are several questions HR leaders should ask themselves when reviewing compliance:

    • If you were notified tomorrow that your institution’s Form I-9 records were going to be audited in the coming weeks, where would your institution be most vulnerable?
    • What actions do you need to take today to address any potential vulnerabilities?
    • Do your presidents, provosts and other campus leaders understand and appreciate the magnitude of this potential challenge?
    • What changes do you need to make to your institution’s hiring and onboarding practices now to ensure compliance moving forward?

    CUPA-HR will continue to monitor for any additional updates related to the Form I-9 and other hiring processes related to work authorization. If you need additional guidance or resources, please review the CUPA-HR I-9/E-Verify Toolkit.


