Tag: dont

  • No thank you, AI, I am not interested. You don’t get my data. #shorts

    Source link

  • Don’t let Texas criminalize free political speech in the name of AI regulation

    This essay was originally published by the Austin American-Statesman on May 2, 2025.


    Texans aren’t exactly shy about speaking their minds — whether it’s at city hall, in the town square, or all over social media. But a slate of bills now moving through the Texas Legislature threatens to make that proud tradition a criminal offense.

    In the name of regulating artificial intelligence, lawmakers are proposing bills that could turn political memes, commentary and satire into crimes.

    Senate Bills 893 and 228, and House Bills 366 and 556, might be attempting to protect election integrity, but these bills actually impose sweeping restrictions that could silence ordinary Texans just trying to express their opinions.

    Take SB 893 and its companion HB 2795. These would make it a crime to create and share AI-generated images, audio recordings, or videos if done with the intent to “deceive” and “influence the result of an election.” The bill offers a limited safeguard: If you want to share any images covered by the bill, you must edit them to add a government-mandated warning label.

    But the bills never define what counts as “deceptive,” handing prosecutors a blank check to decide what speech crosses the line. That’s a recipe for selective enforcement and criminalizing unpopular opinions. And SB 893 has already passed the Senate.

    HB 366, which just passed the House, goes even further. It would require a disclaimer on any political ad that contains “altered media,” even when the content isn’t misleading. Because the provisions apply to anyone spending at least $100 on political advertising, easily the amount a person could spend to boost a social media post or print some flyers, even a private citizen could be subject to the law.

    Once this threshold is met, an AI-generated meme, a five-second clip on social media, or a goofy Photoshop that gives the opponent a giant cartoon head would all suddenly need a legal warning label. No exceptions for satire, parody or commentary are included. If it didn’t happen in real life, you’re legally obligated to slap a disclaimer on it.

    HB 556 and SB 228 take a similarly broad approach, treating all generative AI as suspect and criminalizing creative political expression.

    These proposals aren’t just overkill, they’re unconstitutional. Courts have long held that parody, satire and even sharp political attacks are protected speech. Requiring Texans to add disclaimers to their opinions simply because they used modern tools to express them is not transparency. It’s compelled speech.

    Besides, Texas already has laws on the books to address defamation, fraud and election interference. What these bills do is expand government control over how Texans express themselves while turning political expression into a legal minefield.

    Fighting deception at the ballot box shouldn’t mean criminalizing creativity or chilling free speech online. Texans shouldn’t need a lawyer to know whether they can post a meme they made on social media or make a joke about a candidate.

    Political life in Texas has been known to be colorful, rowdy and fiercely independent — and that’s how it should stay. Vague laws and open-ended definitions shouldn’t dictate what Texans can say, how they can say it, or which tools they’re allowed to use.

    The Texas Legislature should scrap these overbroad AI bills and defend the Lone Star State’s real legacy: fearless, unapologetic free speech.

    Source link

  • Don’t Overlook Alumni as Asset for Advocacy (opinion)

    With research contracts, cost recovery and student financial aid totaling billions of dollars on the line, many universities have called upon powerhouse external lobbying firms to defend against federal funding cuts and make the case for the public good that flows from higher education. Engaging external government relations experts can bring important perspective and leverage in this critical period, but this approach may not be scalable or sustainable across the nearly 550 research universities in large and small communities across the country.

    Fortunately, campuses have their own powerful asset for advocacy: alumni. Graduates know firsthand the benefits of higher education in their lives, professions and communities, and they can also give valuable feedback as campuses work to meet the challenges of this moment and become even better. The National Survey of College Graduates estimates that 72 million individuals hold at least a bachelor’s degree. Engaged well, alumni can be a force multiplier.

    Alumni often get attention in their role as donors. They will receive, on average, more than 90 email messages from their alma mater this year, many asking them to reflect on the value of their college experience and pay it forward. The most generous donors will be celebrated at events or visited personally by campus leaders. Millions and sometimes billions of dollars will be raised to advance campus missions.

    As generous as alumni donors may be, the effectiveness of their philanthropic support is linked to the even greater investments states and the federal government make in higher education. University leaders in fundraising and beyond have an obligation to provide alumni with candid information about the potential impacts of looming generational policy and funding shifts, along with opportunities to support their campus as advocates.

    In a crisis, information and attention necessarily flow first to on-campus constituents. Crisis communications and management plans may initially overlook alumni or underestimate the compelling role that they can play with both external and internal stakeholders. While most alumni are not on the campus, they are of the campus in deep and meaningful ways. And, unlike the handful of ultrawealthy alumni who have weighed in to the detriment of their Ivy League campuses, a broad group of alumni can bring practical wisdom and a voice of reason to challenging issues.

    Campus leaders now preparing for a long period of disruption should assess alumni engagement as part of this planning and gather their teams to consider:

    • How might alumni and development staff work with strategic communications, government relations staff and academic leaders to shape university messaging and advocacy?
    • What facts about policy and funding challenges do alumni need to understand in a media environment filled with misinformation?
    • How might alumni perspectives inform campus discourse about challenges to the institution’s values and academic freedom?
    • How might existing alumni programming provide opportunities for information-sharing between campus leaders, academic leaders and alumni?
    • How are campuses acknowledging and supporting alumni who are directly affected by changes in the federal workforce and economic disruption?

    This is a critical time for campus leaders to build bridges. Alumni can be a huge asset in this work. As degree holders, donors, professionals and citizens, engaged alumni know the specific value of their alma mater and of higher education broadly. They have stakes, authenticity and social capital, and they deserve the opportunity to add their voices.

    Lisa Akchin, senior counsel at RW Jones Agency and founder of On Purpose LLC, previously served as associate vice president for engagement and chief marketing officer at the University of Maryland, Baltimore County.

    Source link

  • So who says we don’t have post qualification admissions already?

    In February 2022, the then Secretary of State for Education, Nadhim Zahawi, announced to Parliament the Johnson government’s decision on post-qualification admissions.

    Clear as a welcome school bell, he stated “we will not be reforming the admissions system to a system of PQA at this time”.

    But who says that we don’t already have PQA?

    Admissions reform by stealth

    The “Decline My Place” button introduced by UCAS instead of Adjustment basically introduced PQA anyway. The only reason we haven’t noticed is that we were not, then, very focused on undergraduate home numbers. How things change.

    Let’s think about JCQ results day 2025. Let’s say I work at an institution in the Russell Group with good recruitment opportunities for UG home and some uncertainties (I enjoy understatement) about postgraduate international numbers. And let’s say I decide to make available hundreds more spaces than I had planned earlier in the cycle.

    But let’s also say that my colleagues further north, west and east do the same. I have a wonderfully smooth confirmation, accepting lots of well qualified and soon-to-be happy young people. I arrive on results day less stressed and tired than usual which is just as well because all hell breaks loose.

    From 8:00am until 1:00pm I am frantically confirming Clearing places and hitting refresh on our numbers forecast every five minutes. My blood pressure is rising, as is my cake consumption (the renewable energy of choice for any self-respecting Admissions Office). I am desperately trying to work out if our gains are ahead of our losses.

    That’s because hundreds (more?) of our nurtured, valued and cultivated unconditional firm offer-holders have hit a button at UCAS and declined their place to go elsewhere. On top of this, for the first time in 2025, some who are still conditional have released themselves too. Fine, I hear you say – if you haven’t processed a decision you deserve to lose the student. But several of these students are still awaiting results (exempt from the requirement that Decline My Place is only for those with a complete set of Level 3 results).

    You may well ask where the problem is here.

    A better offer

    Well, these particular students are from schools and colleges where we have a partnership. Several have been on long-term aspiration-raising enrichment programmes with us for over two years. We have invested all we can in their (everyone must have one) journey. It’s just that they’ve had “a better offer”.

    This may be an offer from an institution in London where “our” student has been offered a big financial incentive, and which grew its Clearing intake from zero to 200 in two years. An offer from a delightful campus in the Midlands where “our” student will be very happy and which would not have been an option when only 45 Clearing places were available – but now there are 500. An offer from an exciting and vibrant institution in the north which can take “our” student for Economics – a real surprise as spaces are not often available for a subject like that, but then this university grew its Clearing intake from 200 to 885 over the last two cycles.

    These are all real examples from last year. Companies may well have to say that past performance is no guarantee of future results, but we wouldn’t select on the basis of predicted grades if it wasn’t a guide to some degree – now would we?

    Personally I have always been in favour of PQA in theory. It is just that the jeopardy I enjoy about admissions doesn’t quite extend to the levels of uncertainty I predict for the few days after 14 August 2025. I wonder how many members of the UCAS Board and how many vice chancellors realise that there is, in a theoretical model that may very well be tested this summer, every possibility that every single firm acceptance that we have all secured, conditional or unconditional, melts away on or before Results Day.

    They can all, with absolutely no controls (apart from a quick call to UCAS if you are still conditional) decline their place and go to the pub to celebrate “trading up”. If that isn’t PQA what is? I need another cake.

    Source link

  • Half of Colleges Don’t Grant Students Access to Gen AI Tools

    Transformative. Disruptive. Game-changing. That’s how many experts continue to refer, without hyperbole, to generative AI’s impact on higher education. Yet more than two years after generative AI went mainstream, half of chief technology officers report that their college or university isn’t granting students institutional access to generative AI tools, which are often gratis and more sophisticated and secure than what’s otherwise available to students. That’s according to Inside Higher Ed’s forthcoming annual Survey of Campus Chief Technology/Information Officers with Hanover Research.

    There remains some significant—and important—skepticism in academe about generative AI’s potential for pedagogical (and societal) good. But with a growing number of institutions launching key AI initiatives underpinned by student access to generative AI tools, and increasing student and employer expectations around AI literacy, student generative AI access has mounting implications for digital equity and workforce readiness. And according to Inside Higher Ed’s survey, cost is the No. 1 barrier to granting access, ahead of lack of need and even ethical concerns.

    Ravi Pendse, who reviewed the findings for Inside Higher Ed and serves as vice president for information technology and chief information officer at the University of Michigan, a leader in granting students access to generative AI tools, wasn’t surprised by the results. But he noted that AI prompting costs, typically measured in units called tokens, have fallen sharply over time. Generative AI models, including open-source large language models, have proliferated over the same period, meaning that institutions have increasing—and increasingly less expensive—options for providing students access to tools.
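
    For a concrete sense of what “costs measured in tokens” means, here is a minimal sketch of how a per-request cost is typically estimated. It is illustrative only: the token counts and per-million-token rates below are hypothetical placeholders, not any provider’s actual pricing.

    ```python
    # Minimal sketch of token-based cost estimation (hypothetical rates).
    def prompt_cost(input_tokens: int, output_tokens: int,
                    price_in_per_m: float, price_out_per_m: float) -> float:
        """Estimate the dollar cost of one request from token counts and
        per-million-token prices."""
        return (input_tokens / 1_000_000) * price_in_per_m \
             + (output_tokens / 1_000_000) * price_out_per_m

    # Example: a 500-token prompt with a 300-token reply at placeholder rates.
    print(f"${prompt_cost(500, 300, price_in_per_m=1.00, price_out_per_m=4.00):.4f}")  # $0.0017
    ```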

    ‘Paralyzed’ by Costs

    “Sometimes we get paralyzed by, ‘I don’t have resources, or there’s no way I can do this,’ and that’s where people need to just lean in,” Pendse said. “I want to implore all leaders and colleagues to step up and focus on what’s possible, and let human creativity get us there.”

    According to the survey—which asked 108 CTOs at two- and four-year colleges, public and private nonprofit, much more about AI, digital transformation, online learning and other key topics—institutional approaches to student generative AI access vary. (The full survey findings will be released next month.)

    Some 27 percent of CTOs said their college or university offers students generative AI access through an institutionwide license, with CTOs at public nonprofit institutions especially likely to say this. Another 13 percent of all CTOs reported student access to generative AI tools is limited to specific programs or departments, with this subgroup made up entirely of private nonprofit CTOs. And 5 percent of the sample reported that students at their institution have access to a custom-built generative AI tool.

    Among community college CTOs specifically (n=22), 36 percent said that students have access to generative AI tools, all through an institutionwide license.

    Roughly half of institutions represented do not offer student access to generative AI tools. Some 36 percent of CTOs reported that their college doesn’t offer access but is considering doing so, while 15 percent said that their institution doesn’t offer access and is not considering it.

    Of those CTOs who reported some kind of student access to generative AI and answered a corresponding question about how they pay for it (n=45), half said associated costs are covered by their central IT budget; most of these are public institution CTOs. Another quarter said there are no associated costs. Most of the rest of this group indicated that funding comes from individual departments. Almost no one said costs are passed on to students, such as through fees.

    Among CTOs from institutions that don’t provide student access who responded to a corresponding question about why not (n=51), the top-cited barrier from a list of possibilities was costs. Ethical concerns, such as those around potential misuse and academic integrity, factored in, as well, followed by concerns about data privacy and/or security. Fewer said there is no need or insufficient technical expertise to manage implementation.

    “I very, very strongly feel that every student that graduates from any institution of higher education must have at least one core course in AI, or significant exposure to these tools. And if we’re not doing that, I believe that we are doing a disservice to our students,” Pendse said. “As a nation we need to be prepared, which means we as educators have a responsibility. We need to step up and not get bogged down by cost, because there are always solutions available. Michigan welcomes the opportunity to partner with any institution out there and provide them guidance, all our lessons learned.”

    The Case for Institutional Access

    But do students really need their institutions to provide access to generative AI tools, given that rapid advances in AI technology also have led to fewer limitations on free, individual-level access to products such as ChatGPT, which many students have and can continue to use on their own?

    Experts such as Sidney Fernandes, vice president and CIO of the University of South Florida, which offers all students, faculty and staff access to Microsoft Copilot, say yes. One reason: privacy and security concerns. USF users of Copilot Chat use the tool in a secure, encrypted environment to maintain data privacy. And the data users share within USF’s Copilot enterprise functions—which support workflows and innovation—also remains within the institution and is not used to train AI models.

    There’s no guarantee, of course, that students with secure, institutional generative AI accounts will use only them. But at USF and beyond, account rollouts are typically accompanied by basic training efforts—another plus for AI literacy and engagement.

    “When we offer guidance on how to use the profiles, we’ve said, ‘If you’re using the commercially available chat bots, those are the equivalent of being on social media. Anything you post there could be used for whatever reason, so be very careful,’” Fernandes told Inside Higher Ed.

    In Inside Higher Ed’s survey, CTOs who reported student access to generative AI tools by some means were no more likely than the group over all to feel highly confident in their institution’s cybersecurity practices—although CTOs as a group may have reason to worry about students and cybersecurity generally: Just 26 percent reported their institution requires student training in cybersecurity.

    Colleges can also grant students access to tools that are much more powerful than freely available and otherwise prompt-limited chat bots, as well as tools that are more integrated into other university platforms and resources. Michigan, for instance, offers students access to an AI assistant and another conversational AI tool, plus a separate tool that can be trained on a custom dataset. Access to a more advanced and flexible tool kit for those who require full control over their AI environments and models is available by request.

    Responsive AI and the Role of Big Tech

    Another reason for institutions to lead on student access to generative AI tools is cultural responsiveness, as AI tools reflect the data they’re trained on, and human biases often are baked into that data. Muhsinah Morris, director of Metaverse programs at Morehouse College, which has various culturally responsive AI initiatives—such as those involving AI tutors that look like professors—said it “makes a lot of sense to not put your eggs in one basket and say that basket is going to be the one that you carry … But at the end of the day, it’s all about student wellness, 24-7, personalized support, making sure that students feel seen and heard in this landscape and developing skills in real time that are going to make them better.”

    The stakes of generative AI in education, for digital equity and beyond, also implicate big tech companies whose generative AI models and bottom lines benefit from the knowledge flowing from colleges and universities. Big tech could therefore be doing much more to partner on free generative AI access with colleges and universities, and not just on the “2.0” and “3.0” models, Morris said.

    “They have a responsibility to also pour back into the world,” she added. “They are not off the hook. As a matter of fact, I’m calling them to the carpet.”

    Jenay Robert, senior researcher at Educause, noted that the organization’s 2025 AI Landscape Study: Into the Digital AI Divide found that more institutions are licensing AI tools than creating their own, across a variety of capabilities. She said digital equity is “certainly one of the biggest concerns when it comes to students’ access to generative AI tools.” Some 83 percent of respondents in that study said they were concerned about widening the digital divide as an AI-related risk. Yet most respondents were also optimistic about AI improving access to and accessibility of educational materials.

    Of course, Robert added, “AI tools won’t contribute to any of these improvements if students can’t access the tools.” Respondents to the Educause landscape study from larger institutions were more likely than those from smaller ones to report that their AI-related strategic planning includes increasing access to AI tools.

    Inside Higher Ed’s survey also reveals a link between institution size and access, with student access to generative AI tools through an institutionwide license, especially, increasing with student population. But just 11 percent of CTOs reported that their institution has a comprehensive AI strategy.

    Still, Robert cautioned that “access is only part of the equation here. If we want to avoid widening the digital equity divide, we also have to help students learn how to use the tools they have access to.”

    In a telling data point from Educause’s 2025 Students and Technology Report, more than half of students reported that most or all of their instructors prohibit the use of generative AI.

    Arizona State University, like Michigan, collaborated early on with OpenAI, but it has multiple vendor partners and grants student access to generative AI tools through an institutionwide license, through certain programs and custom-built tools. ASU closely follows generative AI consumption in a way that allows it to meet varied needs across the university in a cost-effective manner, as “the cost of one [generative AI] model versus another can vary dramatically,” said Kyle Bowen, deputy CIO.

    “A large percentage of students make use of a moderate level of capability, but some students and faculty make use of more advanced capability,” he said. “So everybody having everything may not make sense. It may not be very cost-sustainable. Part of what we have to look at is what we would describe as consumption-based modeling—meaning we are putting in place the things that people need and will consume, not trying to speculate what the future will look like.”

    That’s what even institutions with established student access are “wrestling with,” Bowen continued. “How do we provide that universal level of AI capability today while recognizing that that will evolve and change, and we have to be ready to have technology for the future, as well, right?”

    Source link

  • Don’t Give Trump Student, Faculty Names, Nationalities

    The American Association of University Professors is warning college and university lawyers not to provide the U.S. Education Department’s Office for Civil Rights the names and nationalities of students or faculty involved in alleged Title VI violations.

    The AAUP’s letter comes after The Washington Post reported last week that Education Department higher-ups directed OCR attorneys investigating universities’ responses to reports of antisemitism to “collect the names and nationalities of students who might have harassed Jewish students or faculty.” The department didn’t respond to Inside Higher Ed’s requests for comment Thursday.

    In a 13-page Wednesday letter to college and university general counsels’ offices, four law professors serving as AAUP counsel wrote that higher education institutions “are under no legal compulsion to comply.” The AAUP counsel further urged them “not to comply, given the serious risks and harms of doing so”—noting that the Trump administration is revoking visas and detaining noncitizens over “students’ and faculty members’ speech and expressive activities.” The administration has targeted international students and other scholars suspected of participating in pro-Palestinian advocacy.

    Title VI of the federal Civil Rights Act of 1964 prohibits discrimination based on, among other things, shared ancestry, which includes antisemitism. But the AAUP counsel wrote that “Title VI does not require higher education institutions to provide the personally identifiable information of individual students or faculty members so that the administration can carry out further deportations.”

    And Title VI investigations, they wrote, “are not intended to determine whether the students and faculty who attend these schools have violated any civil rights laws, let alone discipline or punish students or faculty.” They wrote that investigations are instead “intended to determine whether the institution itself has discriminated.”

    Providing this information to the federal government may violate the First Amendment rights of those targeted, plus the Family Educational Rights and Privacy Act and state laws, they wrote, adding that this information shouldn’t be turned over without “clear justification for the release of specific information related to a legitimate purpose in the context of a particular active investigation.”

    Source link

  • AI is new — the laws that govern it don’t have to be

    On Monday, Virginia Governor Glenn Youngkin vetoed House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act. The bill would have set up a broad legal framework for AI, adding restrictions to its development and its expressive outputs that, if enacted, would have put the bill on a direct collision course with the First Amendment.

    This veto is the latest in a number of setbacks to a movement across many states to regulate AI development that originated with a working group put together last year. In February, that group broke down — further indicating upheaval in a once ascendant regulatory push.

    At the same time, another movement has gained steam. A number of states are turning to old laws, including those prohibiting fraud, forgery, discrimination, and defamation, which have long managed the same purported harms stemming from AI in the context of older technology.

    Gov. Youngkin’s HB 2094 veto statement echoed the notion that existing laws may suffice, stating, “There are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more.” FIRE has pointed to this capacity of existing law in previous statements, part of a number of AI-related interventions we’ve made as the technology has come to dominate state legislative agendas, including in states like Virginia.

    The simple idea that current laws may be sufficient to deal with AI initially eluded many lawmakers but is now quickly becoming common sense in a growing number of states. While existing laws may be applied in ways both prudent and not, the emerging trend away from hasty lawmaking and toward more deliberation bodes well for the intertwined future of AI and free speech.

    The regulatory landscape

    AI offers the promise of a new era of knowledge generation and expression, and these developments come at a critical juncture as the technology continues to advance towards that vision. Companies are updating their models at a breakneck pace, epitomized by OpenAI’s popular new image generation tool.

    Public and political interest, fueled by fascination and fear, may thus continue to intensify over the next two years — a period during which AI, still emerging from its nascent stage, will remain acutely vulnerable to threats of new regulation. Mercatus Center Research Fellow and leading AI policy analyst Dean W. Ball has hypothesized that 2025 and 2026 could represent the last two years to enact the laws that will be in place before AI systems with “qualitatively transformative capabilities” are released.

    With AI’s rapid development and deployment as the backdrop, states have rushed to propose new legal frameworks, hoping to align AI’s coming takeoff with state policy objectives. Last year saw the introduction of around 700 bills related to AI, covering everything from “deepfakes” to the use of AI in elections. This year, that number is already approaching 900.

    Texas’s TRAIGA, the Texas Responsible Artificial Intelligence Governance Act, has been the highest-profile example from this year’s wave of restrictive AI bills. Sponsored by Republican State Rep. Giovanni Capriglione, TRAIGA has been one of several “algorithmic discrimination” bills that would impose liability on developers, deployers, and often distributors of AI systems that may introduce a risk of “algorithmic discrimination.” 

    Other examples include the recently vetoed HB 2094 in Virginia, Assembly Bill A768 in New York, and Legislative Bill 642 in Nebraska. While the bills have several problems, most concerning are their inclusion of a “reasonable care” negligence standard that would hold AI developers and users liable if there is a greater than 50% chance they could have “reasonably” prevented discrimination. 

    Such liability provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. The “chill” of these kinds of provisions threatens a broad array of important applications. 

    In Connecticut, for instance, Children’s Hospitals have warned how the vagueness and breadth of such regulations could limit health care providers’ ability to use AI to improve cancer screenings. These bills also compel regular risk reports on the models’ expressive outputs, similar to requirements that were held as unconstitutional under the First Amendment in other contexts by a federal court last year.

    So far, only Colorado has enacted such a law. Its implementation, spearheaded by the statutorily authorized Colorado Artificial Intelligence Impact Task Force, won’t assuage any skeptics. Even Gov. Jared Polis, who conceived the task force and signed the bill, has said it deviates from standard anti-discrimination laws “by regulating the results of AI system use, regardless of intent,” and has encouraged the legislature to “reexamine the concept” as the law is finalized.

    With a mandate to resolve this and other points of tension, the task force has come up almost empty-handed. In its report last month, it reached consensus on only “minor … changes,” while remaining deadlocked on substantive areas such as the law’s equivalent language to TRAIGA on reasonable care.

    The sponsors of TRAIGA reached a similar impasse as it came under intense political scrutiny. Rep. Capriglione responded earlier this month by dropping TRAIGA in favor of a new bill, HB 149. Among HB-149’s provisions, many of which run headlong into protected expression, is a proposed statute that holds “an artificial intelligence system shall not be developed or deployed in a manner that intentionally results in political viewpoint discrimination” or that “intentionally infringes upon a person’s freedom of association or ability to freely express the person’s beliefs or opinions.” 

    But this new language overlooks a landmark Supreme Court ruling just last year that laws in Texas and Florida with similar prohibitions on political discrimination for social media raised significant First Amendment concerns. 

    A more modest alternative

    An approach different from that taken in Colorado and Texas appears to be taking root in Connecticut. Last year, Gov. Ned Lamont signaled he would veto Connecticut Senate Bill 2, a bill similar to the law Colorado passed. In reflecting on his reservations, he noted, “You got to know what you’re regulating and be very strict about it. If it’s, ‘I don’t like algorithms that create biased responses,’ that can go any of a million different ways.” 

    At a press conference at the time of the bill’s consideration, his office suggested existing Connecticut anti-discrimination laws could already apply to AI use in relevant areas like housing, employment, and banking.

    Yale School of Management scholars Jeffrey Sonnenfeld and Stephen Henriques expanded on the idea, noting Connecticut’s Unfair Trade Practices Act would seem to cover major AI developers and small “deployers” alike. They argue that a preferable alternative to new legislation would be for the state attorney general to clarify how existing laws can remedy the harms to consumers that sparked Senate Bill 2 in the first place.

    Connecticut isn’t alone. In California, which often sets the standard for tech law in the United States, two bills — AB 2930, focusing on liability for algorithmic discrimination in the same manner as the Colorado and Texas bills, and SB 1047, focusing on liability for “hazardous capabilities” — both failed. Gov. Gavin Newsom, echoing Lamont, stressed in his veto statement for SB 1047, “Adaptability is critical as we race to regulate a technology still in its infancy.”

    Newsom’s attorney general followed up by issuing extensive guidance on how existing California laws — such as the Unruh Civil Rights Act, California Fair Employment and Housing Act, and California Consumer Credit Reporting Agencies Act — already provide consumer protections for issues that many worry AI will exacerbate, such as consumer deception and unlawful discrimination. 

    New Jersey, Oregon, and Massachusetts have offered similar guidance, with Massachusetts Attorney General Andrea Joy Campbell noting, “Existing state laws and regulations apply to this emerging technology to the same extent as they apply to any other product or application.” And in Texas, where HB 149 still sits in the legislature, Attorney General Ken Paxton is currently reaching settlements in cases about the misuse of AI products in violation of existing consumer protection law.

    Addressing problems

    The application of existing laws, to be sure, must comport with the First Amendment’s broad protections. Accordingly, not all conceivable applications will be constitutional. But the core principle remains: states that are hitting the brakes and reflecting on the tools already available give AI developers and users the benefit of operating within established, predictable legal frameworks. 

    And if enforcement of existing laws runs afoul of the First Amendment, there is an ample body of legal precedent to provide guidance. Some might argue that AI poses different questions from prior technology covered by existing laws, but it departs in neither essence nor purpose. Properly understood, AI is a communicative tool used to convey ideas, like the typewriter and the computer before it.

    If there are perceived gaps in existing laws as AI and its uses evolve, legislatures may try targeted fixes. Last year, for example, Utah passed a statute clarifying that generative AI cannot serve as a defense to violations of state tort law — for example, a party cannot claim immunity from liability simply because an AI system “made the violative statement” or “undertook the violative act.” 

    Rather than introducing entirely new layers of liability, this provision clarifies accountability under existing statutes. 

    Other ideas floated include “regulatory sandboxes,” a voluntary way for private firms to test applications of AI technology in collaboration with the state in exchange for certain regulatory mitigation. The aim is to offer a learning environment for policymakers to study how law and AI interact over time, with emerging issues addressed by a regulatory scalpel rather than a hatchet.

    This reflects an important point. The trajectory of AI is largely unknowable, as is how rules imposed now will affect this early-stage technology down the line. Well-meaning laws to prevent discrimination this year could preclude broad swathes of significant expressive activity in coming years.

    FIRE does not endorse any particular course of action, but this is perhaps the most compelling reason lawmakers should consider the more restrained approach outlined above. Attempting to solve all theoretical problems of AI before the contours of those problems become clear is not only impractical, but risks stifling innovation and expression in ways that may be difficult to reverse. History also teaches that many of the initial worries will never materialize.

    As President Calvin Coolidge observed, “If you see 10 troubles coming down the road, you can be sure that nine will run into the ditch before they reach you and you have to battle with only one of them.” We can address those that do materialize in a targeted manner as the full scope of the problems becomes clear.

    The wisest course of action may be patience. Let existing laws do their job and avoid premature restrictions. Like weary parents, lawmakers should take a breath — and maybe a vacation — while giving AI time to grow up a little.

    Source link

  • Bret Stephens Don’t Know Higher Education

    What I want to know is why The New York Times lets opinion columnist Bret Stephens lie about higher education institutions.

    I understand this is a strong charge, and perhaps it’s unfair. Maybe Stephens is merely uninformed and parroting bad information.

    I’m thinking these things because we recently had the rare occasion of a pundit (Stephens) being challenged in real time by two experts (Tressie McMillan Cottom and M. Gessen) in the form of a three-way conversation printed under the headline “‘It Is Facing a Campaign of Annihilation’: Three Columnists on Trump’s War Against Academia.”

    The conversation is moderated by Patrick Healy, another Times journalist, who gives Stephens the first word on the question “What went wrong with higher ed? How did colleges become such easy pickings?”

    Stephens hearkens back to the infamous Yale Halloween incident from 2015, when students committed the grave error of speaking intemperately to university administrators about a communication that seemed to authorize racially insensitive Halloween costumes over students’ objections.

    Stephens wonders why these students weren’t expelled or at least suspended, justifying a crackdown for what may have been a breach of decorum but was undeniably an exercise of free speech. Stephens ostensibly opposes the Trump administration’s threats against Columbia University and others, and yet here he is essentially endorsing the administration’s rationale of punishing institutions that are not sufficiently punitive toward protesting students.

    The voice of reason appears in the form of Cottom, both an active professor at the University of North Carolina and a sociologist who studies higher education. In the words of Kevin Carey, “Reading Tressie McMillan Cottom debate Bret Stephens on higher education is like watching Steph Curry play H.O.R.S.E. against a barely-sentient lump of gravel.”

    Cottom counters with lived experience over Stephens’s fever dream: “I have taught the most quintessentially tense courses my entire academic career. My course names often have the words race, class and gender in them. I do this as a Black woman. I have never had a problem with students refusing to have debates. It could be that I am a uniquely gifted pedagogue but I reject that idea.”

    This becomes a pattern throughout the exchanges, where Stephens makes something up and then Cottom and/or Gessen knock it down. Later on, Stephens goes on an uninformed rant about the lack of value of degrees with the word “studies” in them before going on to extol the virtues of humanistic study in the spirit of Matthew Arnold: “It means academic rigor, it means the contestation of ideas, it means a spirit of inquiry, curiosity, questioning and skepticism. Outside of a few colleges and universities, I’m not sure that kind of education is being offered very widely.”

    That Stephens is extolling the virtues of rigorous thought and questioning while parroting ill-informed tropes about higher education does not occur to him. Cottom again corrects his misapprehension with verifiable data: “It is worth pointing out that data on labor market returns really challenge the well-worn idea that such degrees are worthless. We love the joke about your barista having a liberal arts degree, but most of the softness among those degree-holders disappears when you look at state-level data and not just starting salaries after graduation.”

    Cottom goes on to acknowledge that there are some problems with the kinds of institutions she wrote about in Lower Ed: The Troubling Rise of the For-Profit College in the New Economy, after which Stephens jumps in with my favorite nonsense of the entire deal before again being corrected—more gently than he deserves—by Cottom:

    Stephens: I’d say the lowest-quality institutions created since the 1990s have names like Columbia and Berkeley—these are essentially factories of Maoist cadres taught by professors whose political views ranged almost exclusively from the left to the far left.

    Cottom: I would counter, Bret, that the lowest-quality institutions are the for-profit colleges created as paradigmatic economic theories of exchange value that churned out millions of students in “career ready” fields who found it hard to get a job worth the debt—colleges not unlike the one that our current dear leader once ran as a purely economic enterprise.

    It is worth pausing here to consider how untethered Stephens is from the truth in saying that Columbia and Berkeley are “essentially factories of Maoist cadres.” One would think that if this were the case, they would be overwhelmingly churning out graduates in those dubious “studies” majors.

    Let’s go to the data.

    Top majors at Columbia: political science, economics, computer science, financial economics

    Top majors at Cal: computer science, economics, cellular biology, computer and information sciences, engineering

    The wokeness … it burns! Actually … it’s nonexistent.

    I don’t know if Stephens has convinced himself of a fantasy based on a selective accounting of what’s happening on campus, promulgated by his center-right anti-woke fellow travelers, or if he is simply a liar, but either way, he is demonstrably out of touch with reality.

    Stephens consistently authorizes the “logic” of the authoritarian, even if he disagrees with the specifics of the punishment. The idea that he would claim the mantle of the protector of rights is an irony beyond understanding.

    Stephens concludes, “When diffident liberal administrators fail to confront the far left, the winners ultimately tend to be on the far right.”

    I take a different lesson from all of this, namely that diffident administrators found some utility in the scolding of figures like Stephens as a rationale to crack down on student dissent and protect a status quo of administrative authority. If student demands are inherently unreasonable, they don’t need to be dealt with. I seem to recall a very popular book that invented an entire psychological pathology on the basis of a handful of campus incidents in order to delegitimize student speech people like Stephens didn’t like because it threatened authority.

    This was the core weakness, and it is coming home to roost, because the most important asset institutions have in defending themselves against the attacks of the Trump administration is their students—provided there is a reservoir of trust between students and administrations, which, in many cases, there isn’t.

    The whole thing is a mess, and an existential one for universities. Stephens seems to think it’s possible that the current actions by Trump are “a loud shot across the bow of academia to get it to clean up its act.” This is, I fear, only additional delusion.

    I’d ask leaders of institutions who they think is going to be a bigger help in this situation: people like Stephens, who seem to believe that at least some measure of the arbitrary punishment is deserved, or the people who live and work in their communities, who understand the mission and importance of what these institutions try to do.

    Listen to the experts, particularly those on your own faculty, not the pundits.

    Source link

  • More engineering applications don’t make for more engineers

    The latest UCAS data (applications by the January ‘equal consideration’ deadline) suggests a 14 per cent increase in applications to engineering and technology courses.

    It’s the second double-digit surge in two years.

    Good news, right? Sadly, it’s mostly not.

    STEM swing

    The upsurge in interest in engineering can be seen as part of a “swing to STEM” (science, technology, engineering, and mathematics).

    As higher education has shifted to a reliance on student debt for funding, many people suspect applicants have felt greater pressure to search for clear, transactional returns which, it may seem, are offered most explicitly by STEM – and, most particularly, by engineering, which is not just STEM, but vocational too.

    Certainly, there’s a keen labour market for more engineers. Engineering UK has suggested the shortfall is around 29,000 graduates every year. According to the British Chambers of Commerce, it’s pretty much the largest skills gap in the UK economy.

    Engineering is also a key driver of the growth that the government is so keen to stimulate, adding £645bn to the UK economy – nearly a whopping third of its entire value. And – unlike financial services, say – engineering is a powerhouse of regional development, as it is spread remarkably evenly throughout the country.

    And it drives that other key government mission, opportunity. An engineering degree confers a higher and more equal graduate premium than almost any other discipline.

    The downside

    So with all these benefits, why is the increase in engineering applications not good news?

    The answer is because it reveals the extent of the lost opportunity: most of these extra potential engineers will be denied places to study, dashing their hopes and the hopes of the country.

    Last year’s rise in applications did not lead to a rise in the number of UK engineering students. Absolute student numbers have more or less stagnated since 2019.

    It used to be that the number of engineering applications broadly aligned with places because it was a highly regarded discipline with great outcomes that universities would expand if they felt they could. The limiting factor was the number of able students applying.

    Now that demand outstrips supply, universities cannot afford to expand the places because each additional UK engineering student represents an ever-growing financial loss.

    Engineering courses are among the most expensive to teach. There are long contact hours and expensive facilities and materials. The EPC estimates the average cost per undergraduate to be around £18,800 a year. Even allowing for top-up funding that is available to many engineering degrees on top of the basic fee income, that leaves an average loss of £7,591 per year.
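
    As a back-of-envelope check on how those two figures fit together, the sketch below derives the implied per-student income. The £9,250 basic fee used here is an assumption for illustration (the standard home undergraduate fee cap in recent years), not a figure from the EPC.

    ```python
    # Back-of-envelope check on the EPC figures quoted above.
    cost_per_student = 18_800   # estimated annual teaching cost per undergraduate (EPC)
    loss_per_student = 7_591    # average annual loss per home student (EPC)

    implied_income = cost_per_student - loss_per_student
    print(f"Implied income per home student: £{implied_income:,}")    # £11,209

    assumed_basic_fee = 9_250   # assumption: standard home fee cap, for illustration only
    implied_top_up = implied_income - assumed_basic_fee
    print(f"Implied top-up funding per student: £{implied_top_up:,}")  # £1,959
    ```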

    It used to be that the way to address such losses was to try to admit more students to spread the fixed costs over greater numbers. That did run the risk of lowering standards, but it made financial sense.

    Now, however, for most universities, the marginal cost of each additional student means that the losses don’t get spread more thinly – they just keep piling up.

    Cross-subsidy

    The only way out is to bring in ever more international students to directly subsidise home undergraduates.

    Although the UCAS data shows a glimmer of hope for recovering international demand, at undergraduate level, there are only a few universities that can make this work. Most universities, even if they could attract more international engineering students, would no longer use the extra income to expand engineering for home students, but rather to shore up the existing deficits of maintaining current levels.

    The UCAS data also show that higher tariff institutions are the main beneficiaries of application increases, at the expense of lower tariff institutions, which traditionally have a wider access intake.

    What this means is that the increased demand for engineering places will not lead to a rise in engineering student numbers, let alone in skilled engineers, but rather to a narrowing of access to engineering, such that it becomes ever harder to get in without the highest grades.

    High prior attainment correlates closely with socioeconomic advantage and so, rather than engineering playing to its strength of driving social mobility, it will run the risk of becoming ever more privileged.

    What about apprenticeships?

    Not to worry, suggests Jamie Cater, head of employment and skills at trade body Make UK, a university degree is not the only option available for acquiring these skills and “the apprenticeship route remains highly valued by manufacturers”.

    That’s small comfort, I’m afraid. The availability of engineering higher apprenticeships suggests competition is even fiercer than it is for degrees and, without the safeguard of fair access regulation, the apprenticeship access track record is poor. (And don’t get me started on drop-outs.)

    This is why I haven’t unfurled the bunting at applicants’ rising enthusiasm for engineering.

    Of course, it is wonderful that so many young people recognise engineering as a fulfilling and forward-looking discipline. An estimated £150m has been spent over the last decade trying to stimulate this growth, and there are over 600 third sector organisations working in STEM outreach in schools. It would be nice to think this has not been wasted effort.

    But it’s hard to celebrate a young person’s ambition to be an engineer if it’s likely to be thwarted. Similarly, I struggle to summon enthusiasm about kids wanting to get rich as TikTok influencers. Indeed, it’s all the more tragic when the country actually does need more engineers.

    This is why the Engineering Professors’ Council has recently called on the government to plug the funding gap in engineering higher education (and HE more widely) in the forthcoming Comprehensive Spending Review.

    Asking for nearly a billion pounds may seem ambitious, but the ongoing failure to fill the engineering skills gap may well be costing the country far more – possibly, given the importance of engineering to GDP, more than the entire higher education budget.

    Johnny Rich is Chief Executive of the Engineering Professors’ Council, the representative body for UK Engineering academics.

    Source link

  • Colleges were quiet after the Nov. election. Students don’t mind

    Colleges can be hot spots for debate, inquiry and disagreement, particularly on political topics. Sometimes institutional leaders weigh in on the debate, issuing public statements or sharing resources internally among students, staff and faculty.

    This past fall, following the 2024 presidential election, college administrators were notably silent. A November Student Voice survey found a majority (63 percent) of student respondents (n=1,031) said their college did not do or say anything after the election, and only 17 percent said their institution released a statement to students about the election.

    A more recent survey from Inside Higher Ed and Generation Lab found this aligns with students’ preferences for institutional response.

    Over half (54 percent) of respondents (n=1,034) to a December Student Voice survey said colleges and universities should not make statements about political events, such as the outcome of the 2024 presidential election. One-quarter of students said they weren’t sure if institutions should make statements, and fewer than a quarter of learners said colleges should publish a statement.

    Across demographics—including institution size and classification, student race, political identification, income level or age—the greatest share of students indicated that colleges shouldn’t make statements. The only group that differed was nonbinary students (n=32), of whom 47 percent said they weren’t sure and 30 percent said no.

    Experts weigh in on the value of institutional neutrality and how college leaders can demonstrate care for learners without sharing statements.

    What’s the sitch: In the past, college administrators have issued statements, either personally or on behalf of the institution, to demonstrate care and concern for students who are impacted by world events, says Heterodox Academy president John Tomasi.

    “There’s also an element, a little more cynically, of trying to get ahead of certain political issues so they [administrators] couldn’t be criticized for having said nothing or not caring,” Tomasi says.

    Students Say

    Even with a majority of colleges and universities not speaking out after the 2024 election, some students think colleges are still being supportive.

    The November Student Voice survey found 35 percent of respondents believed their institution was offering the right amount of support to students after the election results, but 31 percent weren’t sure.

    The events of Oct. 7, 2023, proved complicated for statement-issuing presidents, with almost half of institutions that published statements releasing an additional response after the campus community or others pushed back. Initial statements, according to one analysis, often lacked caring elements, such as the impact to students or health and well-being of university community members in the region.

    A growing number of colleges and universities are choosing to opt out of public political conversations at the executive level, instead selecting to be institutionally neutral. Heterodox Academy, which tracks colleges’ commitments to neutrality, saw numbers rise from a dozen in 2023 to over 100 in 2024.

    Some students are experiencing political fatigue in general, says Vanderbilt University chancellor Daniel Diermeier, particularly relating to the war in Gaza. “This dynamic of ‘which side are you on, and if you’re not with me, you’re against me’ was troubling to many students and was exhausting and had a detrimental impact on the culture of learning, exploration and discussion.”

    Vanderbilt University has held a position of neutrality for many years, part of a free expression policy, which it defines as a “commitment to refrain from taking public positions on controversial issues unless the issue is materially related to the core mission and functioning of the university.”

    College students aren’t the only ones who want fewer organizations talking politics; a November survey by Morning Consult found two-thirds of Americans believe companies should stay out of politics entirely after the 2024 presidential election, and 59 percent want companies to comment neutrally on the results.

    However, an earlier survey by Morning Consult found, across Americans, 56 percent believe higher education institutions are at least somewhat responsible for speaking out on political, societal or cultural issues, compared to 31 percent of respondents who say colleges and universities are not too or not at all responsible.

    Allowing students to speak: Proponents of institutional neutrality say the practice allows discourse to flourish on campus. Taking a position can create a chilling effect, in which people are afraid to speak out in opposition to the prevailing point of view, Diermeier says.

    Recent polls have shown today’s college students are hesitant to share their political opinions, often electing to self-censor due to fears of negative repercussions. Since 2015, this concern has grown, with 33 percent of respondents sharing that they feel uncomfortable discussing their political views on campus, compared to 13 percent a decade ago.

    Part of this hesitancy among students could stem from administrators overstepping and affirming the institution’s perspective on issues one way or another.

    “I hear from students that they want to be the ones making the statements themselves … and if a president makes a statement first, that kind of cuts off the conversation,” says Tomasi, who is a faculty member at Brown University.

    A majority of campus community members want to pursue learning and research, Diermeier says, and “the politicization that has taken hold on many university campuses … that is not what most students and faculty want.”

    Institutional neutrality allows a university to step back and empower students to be political agents, Tomasi says. “The students should be platformed, the professors should be platformed, but the university itself should be a neutral framework for students to do all those things.”

    Neutral, not silent: One distinction Tomasi and Diermeier make about institutional neutrality is that the commitment is not one of silence, but rather selective vocalization to affirm the university’s mission.

    “Neutrality can’t just be the neutrality of convenience,” Tomasi says. “It should be a neutrality of a principle that’ll endure beyond the particular conflict that’s dividing the campus, because it celebrates and stands for and flows from that high ideal of university life as a community of imperfect learners that does value intellectual pluralism.”

    Another area in which universities are obligated to speak up is if the issue challenges the core mission of an institution. Examples of this could include a travel ban against immigration from certain countries, a tax on endowments, a ban on divisive topics or scrutiny of admissions practices.

    “On issues that are core to the academic mission, we’re going to be vocal, we’re going to be engaged and we’re going to be advocates,” Diermeier says, and establishing what is involved in the core mission is key to each institution. “Inside the core doesn’t mean it’s not controversial—it just means it’s inside the core.”

    So what? For colleges and university leaders considering how to move forward, Diermeier and Tomasi offer some advice.

    • Start with the mission in mind. When working with learners, practitioners should strive to advance the mission of seeking knowledge and providing a transformative education, Diermeier says. For faculty in particular, it’s important to give students “room to breathe” and to be exposed to both sides of an argument, because there’s power in understanding another position, even if it’s not shared.
    • Create space for discourse. “It’s expected that the groups that are organized and vocal, they’re more in the conversation and claiming more of the space,” Diermeier says. “It’s our responsibility as leaders of universities to make sure that we are not being unduly influenced by that.” Students should be given the opportunity to engage in free speech, whether that’s protesting or counterprotesting, but that cannot dictate administrative decisions. Vanderbilt student organizations hosted debates and spaces for constructive dialogue prior to the election, which were well attended and respectful.
    • Lean into the discomfort. Advancing free speech and scholarship can be complicated and feel “unnatural,” Tomasi says, because humans prefer to find like-minded people and others who agree with their views, “but there’s something pretty elevated about it that’s attractive, too,” to students. Colleges and universities should consider how promoting discourse can help students feel they belong.
    • Provide targeted outreach. For some issues, such as natural disasters, colleges and universities can provide direct support and messaging to impacted students. “It’s just so much more effective and it can be targeted, and then the messages are also more authentic,” Diermeier says.

    Source link