Tag: dont

  • They Don’t Want to Learn About the Middle East (opinion)

    They Don’t Want to Learn About the Middle East (opinion)

    Being arrested by armed riot police on my own campus was not, somehow, the most jarring thing that has happened to me since the spring of 2024. More disturbing was the experience of being canceled by my hometown.

    In June 2024, I was supposed to give the second of two lectures in a series entitled “History of the Middle East and the Israeli-Palestinian Conflict” at the public library in San Anselmo, Calif., a leafy suburb of San Francisco best known as the longtime home of George Lucas.

    I grew up in San Anselmo during the Sept. 11 era and vividly remember how stereotypes and misperceptions of the Middle East were used to justify war in Iraq and discrimination against Arabs and Muslims at home. I was shaped by the commonplace refrains of that moment, especially that Americans needed to learn more about the Middle East. So, I did. I learned Arabic and Farsi and spent years abroad living across the region. I earned a Ph.D. in Middle Eastern history and am now a professor at a public university in Colorado. I see teaching as a means of countering the misrepresentations that generate conflict.

    But as the second lecture approached, I began receiving alarmed messages from the San Anselmo town librarian. She told me of a campaign to cancel the lecture so intense that discussions about how to respond involved the town’s elected officials, including the mayor. I was warned that “every word you utter tomorrow night will be scrutinized, dissected and used against you and the library” and that she had become “concerned for everyone’s well-being.” Just hours before it was scheduled to begin, the lecture was canceled.

    I later learned more about what had transpired. At a subsequent town council meeting, the librarian described a campaign of harassment and intimidation that included “increasingly aggressive emails” and “coordinated in-person visits” so threatening that she felt that they undermined the safe working environment of library staff.

    In Middle Eastern studies, such stories have become routine. A handful have received public attention—the instructor suspended for booking a room on behalf of a pro-Palestinian student organization, or the Jewish scholar of social movements investigated by Harvard University for supposed antisemitism. Professors have lost job offers or been fired. Even tenure is no protection. These well-publicized examples are accompanied by innumerable others which will likely never be known. In recent months, I have heard harrowing stories from colleagues: strangers showing up to classes and sitting menacingly in the back of the room; pressure groups contacting university administrators to demand that they be fired; visits from the FBI; a deluge of racist hate mail and death threats. It is no surprise that a recent survey of faculty in the field of Middle East Studies found that 98 percent of assistant professors self-censor when discussing Israel-Palestine.

    Compared to the professors losing their jobs and the student demonstrators facing expulsion—and even deportation—my experience is insignificant. It is nothing compared to the scholasticide in Gaza, where Israeli forces have systematically demolished the educational infrastructure and killed untold numbers of academics and students. But the contrast between my anodyne actions and the backlash they have generated illustrates the remarkable breadth of the censorship that permeates American society. The mainstream discourse has been purged not just of Palestinian voices, but of scholarly ones. Most significantly, censorship at home justifies violence abroad. Americans are once again living in an alternate reality—with terribly real consequences.


    On Oct. 7, 2023, it was clear that a deadly reprisal was coming. It was equally evident that no amount of force could free Israeli captives, let alone “defeat Hamas.” I contacted my university media office in hopes of providing valuable context. I had never given a TV interview before, so I spent hours preparing for a thoughtful discussion. Instead, I was asked if this was “Israel’s Pearl Harbor.”

    Well, no, I explained. It was the tragic and predictable result of a so-called peace process that has, for 30 years and with U.S. complicity, done little more than provide cover for the expansion of Israeli settlements. Violence erupts when negotiation fails. Only by understanding why people turn to violence can we end it. I watched the story after it aired. Nearly the whole interview was cut.

    I accepted or passed to colleagues all the interview requests that I received. But they soon dried up. Instead, I began receiving hate mail.

    It quickly became clear that I had to take the initiative to engage with the public. I held a series of historical teach-ins on campus. The audience was attentive, but small. I reached out to a local school district where I had previously provided curriculum advice. I never heard back. I contacted my high school alma mater and offered to speak there. They were too afraid of backlash. I was eventually invited to speak at two libraries, including San Anselmo’s. Everyone else turned me down.


    In April 2024, the Denver chapter of Students for a Democratic Society organized yet another protest in their campaign to pressure the University of Colorado to divest from companies complicit in the Israeli occupation. This event would be different. As one of the students spoke, others erected tents, launching what would become one of the longest-lasting encampments in the country.

    There was no cause for panic. The encampment did not interfere with classes or even block the walkway around the quad. Instead, it became the kind of community space that is all too hard to build on a commuter campus. It hosted speakers, prayer meetings and craft circles. But as I left a faculty meeting the day after the start of the encampment, I sensed that something was wrong. I arrived on the quad to find a phalanx of armed riot police facing down a short row of students standing hand in hand on the lawn.

    Fearing what would happen next, two colleagues and I joined the students and sat down, hoping to de-escalate the situation and avoid violence. The police surrounded us, preventing any escape. Then they were themselves surrounded by faculty, students and community members who were clearly outraged by their presence. We sat under the sun for nearly two hours as chaos swirled around us. The protesters cleared away the tents to demonstrate their compliance. It made no difference. Forty of us were arrested, zip-tied and jailed. I was charged with interference and trespassing. Others faced more serious charges. I was detained for more than 12 hours, until 3:00 in the morning.

    The arrests backfired. When the police departed, the protesters returned, invigorated by an outpouring of community support. I visited the encampment regularly over the following weeks. When the threat of war with Iran loomed, I gave a talk about Iranian history. When the activists organized their own graduation, they invited me to give a commencement address. I spoke about their accomplishments: that they had taken real risks, made real sacrifices and faced real consequences in order to do what was right. The encampment became the place where I could speak most freely, on campus or off.

    While the encampment came to an end in May, the prosecutions did not. The city offered me deferred prosecution, meaning that the matter would be dropped if I did not break the law for six months. I am not, to put it lightly, a seasoned lawbreaker, so the deal would have effectively made everything disappear. I turned it down. Accepting the offer would have prevented me from challenging the legality of the arrests, and I was determined to do what I could to prevent armed riot police from ever again suppressing a peaceful student demonstration. It was a matter of principle and precedent. A civil rights attorney agreed to represent me pro bono. I would fight the charges.


    During my pretrial hearings, I learned more about the cancellation of my lecture in San Anselmo. A local ceasefire group served the town with a freedom of information request that yielded hundreds of pages of emails. Two days before the talk was scheduled, one local resident sent an “all hands on deck” email that called for a coordinated campaign against my lecture “in hopes of getting it canceled.” A less technologically savvy recipient forwarded the message on to the library, providing an inside view.

    The denunciations presented a version of myself that I did not recognize. The letters relied on innuendo and misrepresentation. Many claimed that I was “pro-Hamas” or accused me of antisemitism, which they invariably conflated with criticism of Israeli policy. Several expressed concern about what I might say, rather than anything I have ever actually said, while others misquoted me. Fodder for the campaign came largely from media reports of my arrest and video of my commencement address, both taken out of context. One claimed that the talk was “a violation of multiple Federal and California Statutes.” Another claimed that I “seemed to promote ongoing violence”—the lawyerly use of the word “seemed” betraying the lack of evidence behind the accusation.

    Perhaps the most popular claim was that I am biased, an activist rather than a scholar. My opponents seemed especially offended by my use of the word “genocide.” But genocide is not an epithet—it is an analytical term that represents the consensus in my field. A survey of Middle East studies scholars conducted in the weeks surrounding the talk found that 75 percent viewed Israeli actions in Gaza as either “genocide” or “major war crimes akin to genocide.”

    I was most struck by how many people objected to the idea of contextualizing the Oct. 7 attack; one even called it “insulting.” But contextualization is not justification. Placing events in a wider frame is central to the study of history—indeed, it is why history matters. If violence is not explained by the twists and turns of events, it can only be understood as the product of intrinsic qualities—that certain people, or groups of people, are inherently violent or uncivilized. In the absence of context, bigotry reigns.

    I did what I could to fight back against the censorship campaign. After reading the library emails, I reached out to journalists at several local news outlets to inform them about the incident. None followed up. The only report ever published was written by an independent journalist on Substack.

    In the weeks leading up to my trial, I wrote an op-ed calling for the charges to be dropped. I noted that the protest was entirely peaceful until the police arrived. I asked how our students, especially our undocumented students or students of color, can feel safe on campus when the authorities respond to peaceful demonstrations by calling the police. I sent the article to a local paper. I never heard back. I sent it to a second. Then a third. None responded. It was never published.

    In October, prosecutors dropped the charges against me. The official order of dismissal stated that they did not believe that they had a reasonable likelihood of conviction. I have now joined a civil lawsuit against the campus police in the hope that it will make the authorities think twice before turning to the police to arrest student demonstrators.


    Scholars of the Middle East are caught in an inescapable bind. Activist spaces are the only ones left open to us, but we are dismissed as biased when we use them. We are invited to share our insights only if they are deemed uncontroversial by the self-appointed gatekeepers of the conventional wisdom. If we condemn—or even just name—the genocide unfolding before our eyes, we are deplatformed and silenced. The logic is circular and impenetrable. It is also poison to the body politic. It rests on a nonsensical conception of objectivity that privileges power over truth. This catch-22 is no novel creation of the new administration. The institutions most complicit in its creation are the pillars of society ostensibly dedicated to the pursuit of justice—the press, the courts and the academy itself. They have constricted the boundaries of respectable discourse until they fit comfortably within the Beltway consensus. Rather than confronting reality, they have become apologists for genocide and architects of the post-truth world. They have learned nothing from Iraq. Nor do they want to. They don’t want to learn about the Middle East.

    Alex Boodrookas is an assistant professor of history at Metropolitan State University of Denver. The opinions expressed here are his own and do not represent those of his employer.

    Source link

  • As Recession Risk Rises, Don’t Expect 2008 Repeat (opinion)

    As Recession Risk Rises, Don’t Expect 2008 Repeat (opinion)

    Months into the second Trump administration, clear trends are reshaping the higher education landscape. Economic uncertainty stemming from inconsistent tariff policies has left businesses and consumers grappling with unpredictability. Meanwhile, efforts by the administration and congressional leadership to overhaul federal funding for higher education, including cuts to research grants and proposed cuts to Pell Grants and student loans, have created significant challenges for the sector.

    The U.S. economy contracted slightly in the first quarter of 2025, with the administration’s erratic and unpredictable policies amplifying recession risks. These fluctuations have led some to draw comparisons to the 2008 Great Recession, particularly regarding public higher education. While some lessons of that recession for higher education, such as those related to state appropriations, remain relevant, others may not apply due to the administration’s unique policies and priorities.

    Since the 1980s, economic downturns have increasingly impacted public higher education, primarily due to state budget cuts. During the 1980 recession, state educational appropriations per full-time-equivalent student dropped by 6 percent but recovered to pre-recession levels by 1985. In contrast, during the 2008 Great Recession, funding fell by nearly 26 percent, and most states never fully restored funding to pre-recession levels before the COVID-19 pandemic once again disrupted budgets in 2020. This prolonged recovery left public institutions financially weakened, with reduced capacity to support students.

    More than a decade after the Great Recession, public institutions were struggling to regain the level of state funding they once received. This prolonged recovery significantly affected student loan borrowing. The Great Recession weakened higher education systems as states shifted funds to mandatory expenses and relied on the federal student loan system and Pell Grants to cover a growing share of students’ educational costs. As a result, when states reduce funding, students and their families shoulder more financial responsibility, leading to greater student loan debt.

    During the Great Recession, public institutions were operating with reduced funding and downsizing, even as rising joblessness drove more people to enroll in college. Before 2008, total enrollment in degree-granting institutions was about 18.3 million, but by 2011–12, it exceeded 21 million. This period marked the emergence of the modern student loan crisis. Public institutions, already strained by reduced funding, faced the dual challenge of accommodating more students while maintaining quality. For many students, especially those pursuing graduate degrees, borrowing became a necessity. The economic downturn exacerbated these trends, further entrenching reliance on debt to finance education.

    A future recession could have an even more pronounced impact on public higher education, particularly in terms of state funding. The recently passed House budget bill, which proposes substantial cuts to higher education and Medicaid, exacerbates this risk by forcing states to prioritize addressing these funding shortfalls. Consequently, as legislatures shift resources to more immediate needs, both states and students may find themselves unable to rely on federal aid to support education. Long-standing research indicates that states will prioritize health-care funding over higher education. This pattern suggests that recent state investments in higher education could be rolled back or significantly reduced, even before a recession takes hold.

    The financial pressures on public institutions are already evident. Some systems are considering closing branch campuses, while others are cutting programs, laying off staff or grappling with declining enrollments. In addition, public regional institutions are particularly at risk, as they depend heavily on state funding and serve many of the students most vulnerable to financial challenges. If a recession occurs, these institutions may face severe and rapid downsizing.

    Following downsizing, a key consideration is whether a future recession will lead to an enrollment rebound similar to that seen during the Great Recession. This issue can be analyzed through two key factors: (1) the severity of joblessness and (2) the availability of grants, scholarships and loans, as well as the repayment structures of those loans.

    During the 2008 crisis, unemployment peaked at 10 percent, double the pre-recession rate, with a loss of 8.6 million jobs. Higher unemployment historically benefits higher education as individuals seek to retool their skills during economic downturns. Economists predict that under the current administration, unemployment could rise from 4.1 percent to between 4.7 percent and 7.5 percent, though projections are uncertain due to volatile policies. While higher unemployment might lead more people to consider enrolling in college, proposed changes to financial aid policies could significantly dampen such trends.

    The House’s One Big Beautiful Bill Act introduces stricter eligibility requirements for Pell Grants, such as tying awards to minimum credit-hour thresholds. Students would need to enroll in at least 30 credit hours per year for maximum awards and at least 15 credit hours per year to qualify at all. Furthermore, the bill eliminates subsidized student loans, meaning students would accrue interest while still in school. This change could add an estimated $6,000 in debt per undergraduate borrower, increasing the financial burden on students and potentially deterring enrollment.
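
    As a rough illustration of where in-school interest accrual comes from, here is a minimal sketch; the borrowing amounts, interest rate and timelines below are assumptions for illustration only, not figures from the bill or from the $6,000 estimate.

    ```python
    # Minimal sketch, under assumed figures, of interest that accrues on
    # unsubsidized loans while a student is still enrolled. None of these
    # numbers come from the bill itself.

    ANNUAL_BORROWING = 5_500   # assumed amount borrowed each academic year
    YEARS_ENROLLED = 4         # assumed time to degree
    RATE = 0.065               # assumed fixed undergraduate interest rate
    GRACE_YEARS = 0.5          # assumed six-month grace period after graduation

    def in_school_interest(annual_borrowing, years, rate, grace):
        """Simple (non-compounding) interest accrued before repayment begins."""
        total = 0.0
        for year_taken in range(years):
            years_accruing = (years - year_taken) + grace
            total += annual_borrowing * rate * years_accruing
        return total

    principal = ANNUAL_BORROWING * YEARS_ENROLLED
    accrued = in_school_interest(ANNUAL_BORROWING, YEARS_ENROLLED, RATE, GRACE_YEARS)
    print(f"Principal borrowed: ${principal:,.0f}")
    print(f"Interest accrued before repayment: ${accrued:,.0f}")
    # Subsidized loans cover this in-school interest; eliminating them adds
    # the accrued amount to the borrower's balance when repayment begins.
    ```

    Under these particular assumptions the accrued interest comes to roughly $4,300; estimates like the $6,000 figure cited above depend on the borrowing amounts and rates assumed.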

    On the repayment side, the proposed Repayment Assistance Plan would replace existing income-driven repayment options. Unlike current plans, RAP bases payments on adjusted gross income rather than discretionary income, resulting in higher monthly payments for lower-income borrowers. Although RAP ensures borrowers do not face negative amortization—which matters for borrowers’ financial and mental well-being—its 30-year forgiveness timeline is longer than that of current IDR plans, and its lack of inflation adjustments makes it less appealing still. Together, these changes could discourage potential students, particularly those from low-income or disadvantaged backgrounds, and depress graduate student enrollment.
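
    To make the AGI-versus-discretionary-income distinction concrete, here is a toy comparison; the flat 10 percent rate and the protected-income threshold are assumptions for the sketch, not the actual RAP or IDR payment schedules.

    ```python
    # Toy comparison only: payments based on total AGI versus payments based on
    # income above a protected threshold. The flat 10% rate and the poverty-line
    # figure are assumptions, not the schedules in the bill or current IDR plans.

    ASSUMED_POVERTY_LINE = 15_650          # assumed guideline, one-person household
    PROTECTED_INCOME = 1.5 * ASSUMED_POVERTY_LINE

    def discretionary_income_payment(agi, share=0.10):
        """Monthly payment as a share of income above the protected threshold."""
        return share * max(agi - PROTECTED_INCOME, 0) / 12

    def agi_payment(agi, share=0.10):
        """Monthly payment as a share of total adjusted gross income."""
        return share * agi / 12

    for agi in (25_000, 40_000, 60_000):
        print(f"AGI ${agi:,}: discretionary-based ${discretionary_income_payment(agi):,.0f}/mo, "
              f"AGI-based ${agi_payment(agi):,.0f}/mo")
    ```

    The gap between the two approaches is proportionally largest at the bottom of the income scale, which is the dynamic described above.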

    The bill also introduces a risk-sharing framework that requires institutions to repay the federal government for a portion of unpaid student loans. This framework, based on factors such as student retention and default rates, could influence enrollment decisions. Institutions might avoid admitting students who pose financial risks, such as students from low-income backgrounds, students with lower precollege performance or nonwhite students, thereby restricting access and perpetuating inequities. Alternatively, some institutions may opt out of the student loan system entirely, further limiting opportunities for those who rely on federal aid.

    Recent executive actions pausing international student visa interviews will hinder the ability to recruit international students and eliminate the potential for these students to help subsidize low-income domestic students. As a result, institutions have fewer resources to support key groups in the administration’s electoral base without burdening American taxpayers. These actions not only increase the cost of higher education but also appear inconsistent with a fiscally conservative ideology.

    Mass layoffs in the Department of Education have delayed financial aid processing and compliance and hindered institutions’ ability to support more low-income students during an economic downturn. These personnel play a critical role in ensuring that state higher education systems receive the funding needed to expand access for low-income students. During the last recession, their efforts were essential to fostering student success, but under the current administration, the federal government continues to be an unreliable partner.

    While lessons from the Great Recession may offer some insight for public higher education during a future recession, the financial context and the priorities of the administration and congressional majority leadership differ significantly. Unlike the Great Recession, the next economic downturn may not lead to a surge in higher education enrollment. Without proactive measures to protect funding, expand financial aid and increase opportunity, public higher education risks reduced capacity and declining student outcomes. These changes will likely undermine higher education’s role as a pathway to economic mobility and societal progress.

    Daniel A. Collier is an assistant professor of higher and adult education at the University of Memphis. His work focuses on higher education policy, leadership and issues like student loan debt and financial aid; recent work has focused on Public Service Loan Forgiveness. Connect with Daniel on Bluesky at @dcollier74.bsky.social.

    Michael Kofoed is an assistant professor of economics at the University of Tennessee, Knoxville. His research interests include the economics of education, higher education finance and the economics of financial aid; recent work has focused on online learning during COVID. Connect with Mike on X at @mikekofoed.

    Source link

  • A big reason why students with math anxiety underperform — they just don’t do enough math

    A big reason why students with math anxiety underperform — they just don’t do enough math

    Math anxiety isn’t just about feeling nervous before a math test. It’s been well-known for decades that students who are anxious about math tend to do worse on math tests and in math classes.

    But recently, some of us who research math anxiety have started to realize that we may have overlooked a simple yet important reason why students who are anxious about math underperform: They don’t like doing math, and as a result, they don’t do enough of it.

    We wanted to get a better idea of just what kind of impact math anxiety could have on academic choices and academic success throughout college. In one of our studies, we measured math anxiety levels right when students started their postsecondary education. We then followed them throughout their college career, tracking what classes they took and how well they did in them.

    We found that highly math-anxious students went on to perform worse not just in math classes, but also in STEM classes more broadly. This means that math anxiety is not something that only math teachers need to care about — science, technology and engineering educators need to have math anxiety on their radar, too.

    We also found that students who were anxious about math tended to avoid taking STEM classes altogether if they could. They would get their math and science general education credits out of the way early on in college and never look at another STEM class again. So not only is math anxiety affecting how well students do when they step into a STEM classroom, it makes it less likely that they’ll step into that classroom in the first place.

    This means that math anxiety is causing many students to self-sort out of the STEM career pipeline early, closing off career paths that would likely be fulfilling (and lucrative).

    Our study’s third major finding was the most surprising. When it came to predicting how well students would do in STEM classes, math anxiety mattered even more than math ability. Our results showed that if you were a freshman in college and you wanted to do well in your STEM classes, you would likely be better off reducing your math anxiety than improving your math ability.

    We wondered: How could that be? How could math anxiety — how you feel about math — matter more for your academic performance than how good you are at it? Our best guess: avoidance.

    If something makes you anxious, you tend to avoid doing it if you can. Both in our research and in that of other researchers, there’s been a growing understanding that in addition to its other effects, math anxiety means that you’ll do your very best to engage with math as little as possible in situations where you can’t avoid it entirely.

    This might mean putting in less effort during a math test, paying less attention in math class and doing fewer practice problems while studying. In the case of adults, this kind of math avoidance might look like pulling out a calculator whenever the need to do math arises just to avoid doing it yourself.

    In some of our other work, we found that math-anxious students were less interested in doing everyday activities precisely to the degree that they thought those activities involved math. The more a math-anxious student thought an activity involved math, the less they wanted to do it.

    If math anxiety is causing students to consistently avoid spending time and effort on their classes that involve math, this would explain why their STEM grades suffer.

    What does all of this mean for educators? Teachers need to be aware that students who are anxious about math are less likely to engage with math during class, and they’re less likely to put in the effort to study effectively. All of this avoidance means missed opportunities for practice, and that may be the key reason why many math-anxious students struggle not only in math class, but also in science and engineering classes that require some math.

    Related: Experts share the latest research on how teachers can overcome math anxiety

    We math anxiety researchers are at the very beginning of our journey to understand how to get students who are anxious about math to stop avoiding it, but we have already made some promising suggestions for how teachers can help. One study showed that a direct focus on study skills could help math-anxious students.

    Giving students clear structure on how they should be studying (trying lots of practice problems) and how often they should be studying (spaced out over multiple days, not just the night before a test) was effective at helping students overcome their math anxiety and perform better.

    Especially heartening was the fact that the effects seen during the study persisted in semesters beyond the intervention; these students tended to make use of the new skills into the future.

    Math anxiety researchers will continue to explore new ways to help math-anxious students fight their math-avoidant proclivities. In the meantime, educators should do what they can to help their students struggling with math anxiety overcome this avoidance tendency — it could be one of the most powerful ways a math teacher can help shape their students’ futures.

    Rich Daker is a researcher and founder of Pinpoint Learning, an education company that makes research-backed tools to help educators identify why their students make mistakes. Ian Lyons is an associate professor in Georgetown University’s Department of Psychology and principal investigator for the Math Brain Lab.

    Contact the opinion editor at [email protected].

    This story about math anxiety was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

    Source link

  • No thank you, AI, I am not interested. You don’t get my data. #shorts

    No thank you, AI, I am not interested. You don’t get my data. #shorts

    Source link

  • Don’t let Texas criminalize free political speech in the name of AI regulation

    Don’t let Texas criminalize free political speech in the name of AI regulation

    This essay was originally published by the Austin American-Statesman on May 2, 2025.


    Texans aren’t exactly shy about speaking their minds — whether it’s at city hall, in the town square, or all over social media. But a slate of bills now moving through the Texas Legislature threatens to make that proud tradition a criminal offense.

    In the name of regulating artificial intelligence, lawmakers are proposing bills that could turn political memes, commentary and satire into crimes.

    Senate Bills 893 and 228, and House Bills 366 and 556, might be attempting to protect election integrity, but these bills actually impose sweeping restrictions that could silence ordinary Texans just trying to express their opinions.

    Take SB 893 and its companion HB 2795. These would make it a crime to create and share AI-generated images, audio recordings, or videos if done with the intent to “deceive” and “influence the result of an election.” The bill offers a limited safeguard: If you want to share any images covered by the bill, you must edit them to add a government-mandated warning label.

    But the bills never define what counts as “deceptive,” handing prosecutors a blank check to decide what speech crosses the line. That’s a recipe for selective enforcement and criminalizing unpopular opinions. And SB 893 has already passed the Senate.

    HB 366, which just passed the House, goes even further. It would require a disclaimer on any political ad that contains “altered media,” even when the content isn’t misleading. With the provisions applying to anyone spending at least $100 on political advertising, which is easily the amount a person could spend to boost a social media post or to print some flyers, a private citizen could be subject to the law.

    Once this threshold is met, an AI-generated meme, a five-second clip on social media, or a goofy Photoshop that gives the opponent a giant cartoon head would all suddenly need a legal warning label. No exceptions for satire, parody or commentary are included. If it didn’t happen in real life, you’re legally obligated to slap a disclaimer on it.

    HB 556 and SB 228 take a similarly broad approach, treating all generative AI as suspect and criminalizing creative political expression.

    These proposals aren’t just overkill, they’re unconstitutional. Courts have long held that parody, satire and even sharp political attacks are protected speech. Requiring Texans to add disclaimers to their opinions simply because they used modern tools to express them is not transparency. It’s compelled speech.

    Besides, Texas already has laws on the books to address defamation, fraud and election interference. What these bills do is expand government control over how Texans express themselves while turning political expression into a legal minefield.

    Fighting deception at the ballot box shouldn’t mean criminalizing creativity or chilling free speech online. Texans shouldn’t need a lawyer to know whether they can post a meme they made on social media or make a joke about a candidate.

    Political life in Texas has been known to be colorful, rowdy and fiercely independent — and that’s how it should stay. Vague laws and open-ended definitions shouldn’t dictate what Texans can say, how they can say it, or which tools they’re allowed to use.

    The Texas Legislature should scrap these overbroad AI bills and defend the Lone Star state’s real legacy: fearless, unapologetic free speech.

    Source link

  • Don’t Overlook Alumni as Asset for Advocacy (opinion)

    Don’t Overlook Alumni as Asset for Advocacy (opinion)

    With research contracts, cost recovery and student financial aid totaling billions of dollars on the line, many universities have called upon powerhouse external lobbying firms to defend against federal funding cuts and make the case for the public good that flows from higher education. Engaging external government relations experts can bring important perspective and leverage in this critical period, but this approach may not be scalable or sustainable across the nearly 550 research universities in large and small communities across the country.

    Fortunately, campuses have their own powerful asset for advocacy: alumni. Graduates know firsthand the benefits of higher education in their lives, professions and communities, and they can also give valuable feedback as campuses work to meet the challenges of this moment and become even better. The National Survey of College Graduates estimates that 72 million individuals hold at least a bachelor’s degree. Engaged well, alumni can be a force multiplier.

    Alumni often get attention in their role as donors. They will receive, on average, more than 90 email messages from their alma mater this year, many asking them to reflect on the value of their college experience and pay it forward. The most generous donors will be celebrated at events or visited personally by campus leaders. Millions and sometimes billions of dollars will be raised to advance campus missions.

    As generous as alumni donors may be, the effectiveness of their philanthropic support is linked to the even greater investments states and the federal government make in higher education. University leaders in fundraising and beyond have an obligation to provide alumni with candid information about the potential impacts of looming generational policy and funding shifts, along with opportunities to support their campus as advocates.

    In a crisis, information and attention necessarily flow first to on-campus constituents. Crisis communications and management plans may initially overlook alumni or underestimate the compelling role that they can play with both external and internal stakeholders. While most alumni are not on the campus, they are of the campus in deep and meaningful ways. And, unlike the handful of ultrawealthy alumni who have weighed in to the detriment of their Ivy League campuses, a broad group of alumni can bring practical wisdom and a voice of reason to challenging issues.

    Campus leaders now preparing for a long period of disruption should assess alumni engagement as part of this planning and gather their teams to consider:

    • How might alumni and development staff work with strategic communications, government relations staff and academic leaders to shape university messaging and advocacy?
    • What facts about policy and funding challenges do alumni need to understand in a media environment filled with misinformation?
    • How might alumni perspectives inform campus discourse about challenges to the institution’s values and academic freedom?
    • How might existing alumni programming provide opportunities for information-sharing between campus leaders, academic leaders and alumni?
    • How are campuses acknowledging and supporting alumni who are directly affected by changes in the federal workforce and economic disruption?

    This is a critical time for campus leaders to build bridges. Alumni can be a huge asset in this work. As degree holders, donors, professionals and citizens, engaged alumni know the specific value of their alma mater and of higher education broadly. They have stakes, authenticity and social capital, and they deserve the opportunity to add their voices.

    Lisa Akchin, senior counsel at RW Jones Agency and founder of On Purpose LLC, previously served as associate vice president for engagement and chief marketing officer at University of Maryland Baltimore County.

    Source link

  • So who says we don’t have post qualification admissions already?

    So who says we don’t have post qualification admissions already?

    In February 2022, then Secretary of State for Education Nadhim Zahawi told Parliament of the Johnson government’s decision on post-qualification admissions.

    Clear as a welcome school bell, he stated “we will not be reforming the admissions system to a system of PQA at this time”.

    But who says that we don’t already have PQA?

    Admissions reform by stealth

    The “Decline My Place” button that UCAS introduced in place of Adjustment basically delivered PQA anyway. The only reason we haven’t noticed is that we were not, then, very focused on undergraduate home numbers. How things change.

    Let’s think about JCQ results day 2025. Let’s say I work at an institution in the Russell Group with good recruitment opportunities for UG home and some uncertainties (I enjoy understatement) about postgraduate international numbers. And let’s say I decide to make hundreds more spaces available than previously planned earlier in the cycle.

    But let’s also say that my colleagues further north, west and east do the same. I have a wonderfully smooth confirmation, accepting lots of well-qualified and soon-to-be happy young people. I arrive on results day less stressed and tired than usual, which is just as well, because all hell breaks loose.

    From 8:00am until 1:00pm I am frantically confirming Clearing places and hitting refresh on our numbers forecast every five minutes. My blood pressure is rising, as is my cake consumption (the renewable energy of choice for any self-respecting Admissions Office). I am desperately trying to work out if our gains are ahead of our losses.

    That’s because hundreds (more?) of our nurtured, valued and cultivated unconditional firm offer-holders have hit a button at UCAS and declined their place to go elsewhere. On top of this, for the first time in 2025, some who are still conditional have released themselves too. Fine, I hear you say – if you haven’t processed a decision you deserve to lose the student. But several of these students are still awaiting results (exempt from the requirement that Decline My Place is only for those with a complete set of Level 3 results).

    You may well ask where the problem is here.

    A better offer

    Well, these particular students are from schools and colleges where we have a partnership. Several have been on long-term aspiration-raising enrichment programmes with us for over two years. We have invested all we can in their (everyone must have one) journey. It’s just that they’ve had “a better offer”.

    This may be an offer from an institution in London where “our” student has been offered a big financial incentive, and which grew its Clearing intake from zero to 200 in two years. An offer from a delightful campus in the Midlands where “our” student will be very happy and which would not have been an option when only 45 Clearing places were available – but now there are 500. An offer from an exciting and vibrant institution in the north which can take “our” student for Economics – a real surprise as spaces are not often available for a subject like that, but then this university grew its Clearing intake from 200 to 885 over the last two cycles.

    These are all real examples from last year. Companies may well have to say that past performance is no guarantee of future results, but we wouldn’t select on the basis of predicted grades if it wasn’t to some degree – now would we?

    Personally I have always been in favour of PQA in theory. It is just that the jeopardy I enjoy about admissions doesn’t quite extend to the levels of uncertainty I predict for the few days after 14 August 2025. I wonder how many members of the UCAS Board and how many vice chancellors realise that there is, in a theoretical model that may very well be tested this summer, every possibility that every single firm acceptance that we have all secured, conditional or unconditional, melts on or before Results Day.

    They can all, with absolutely no controls (apart from a quick call to UCAS if you are still conditional) decline their place and go to the pub to celebrate “trading up”. If that isn’t PQA what is? I need another cake.

    Source link

  • Half of Colleges Don’t Grant Students Access to Gen AI Tools

    Half of Colleges Don’t Grant Students Access to Gen AI Tools

    Transformative. Disruptive. Game-changing. That’s how many experts continue to refer, without hyperbole, to generative AI’s impact on higher education. Yet more than two years after generative AI went mainstream, half of chief technology officers report that their college or university isn’t granting students institutional access to generative AI tools, which are often gratis and more sophisticated and secure than what’s otherwise available to students. That’s according to Inside Higher Ed’s forthcoming annual Survey of Campus Chief Technology/Information Officers with Hanover Research.

    There remains some significant—and important—skepticism in academe about generative AI’s potential for pedagogical (and societal) good. But with a growing number of institutions launching key AI initiatives underpinned by student access to generative AI tools, and increasing student and employer expectations around AI literacy, student generative AI access has mounting implications for digital equity and workforce readiness. And according to Inside Higher Ed’s survey, cost is the No. 1 barrier to granting access, ahead of lack of need and even ethical concerns.

    Ravi Pendse, who reviewed the findings for Inside Higher Ed and serves as vice president for information technology and chief information officer at the University of Michigan, a leader in granting students access to generative AI tools, wasn’t surprised by the results. But he noted that AI prompting costs, typically measured in units called tokens, have fallen sharply over time. Generative AI models, including open-source large language models, have proliferated over the same period, meaning that institutions have increasing—and increasingly less expensive—options for providing students access to tools.
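
    A back-of-the-envelope sketch shows why falling per-token prices matter for campus budgets; every figure here (enrollment, usage, token counts, prices) is an assumption for illustration, since real vendor pricing and student usage vary widely.

    ```python
    # Rough cost sketch for institution-wide LLM access priced per token.
    # All inputs are assumptions for illustration, not actual vendor pricing.

    STUDENTS = 30_000
    PROMPTS_PER_STUDENT_PER_MONTH = 100   # assumed average usage
    INPUT_TOKENS_PER_PROMPT = 500         # assumed prompt length
    OUTPUT_TOKENS_PER_PROMPT = 700        # assumed response length
    PRICE_PER_M_INPUT = 0.50              # assumed $ per million input tokens
    PRICE_PER_M_OUTPUT = 1.50             # assumed $ per million output tokens

    monthly_input_tokens = STUDENTS * PROMPTS_PER_STUDENT_PER_MONTH * INPUT_TOKENS_PER_PROMPT
    monthly_output_tokens = STUDENTS * PROMPTS_PER_STUDENT_PER_MONTH * OUTPUT_TOKENS_PER_PROMPT

    monthly_cost = (monthly_input_tokens / 1e6) * PRICE_PER_M_INPUT \
                 + (monthly_output_tokens / 1e6) * PRICE_PER_M_OUTPUT

    print(f"Estimated monthly API cost: ${monthly_cost:,.0f}")
    print(f"Estimated annual API cost: ${monthly_cost * 12:,.0f}")
    # Halving per-token prices halves this estimate, which is why falling token
    # costs change the calculus for institution-wide access.
    ```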

    ‘Paralyzed’ by Costs

    “Sometimes we get paralyzed by, ‘I don’t have resources, or there’s no way I can do this,’ and that’s where people need to just lean in,” Pendse said. “I want to implore all leaders and colleagues to step up and focus on what’s possible, and let human creativity get us there.”

    According to the survey—which asked 108 CTOs at two- and four-year colleges, public and private nonprofit, much more about AI, digital transformation, online learning and other key topics—institutional approaches to student generative AI access vary. (The full survey findings will be released next month.)

    Some 27 percent of CTOs said their college or university offers students generative AI access through an institutionwide license, with CTOs at public nonprofit institutions especially likely to say this. Another 13 percent of all CTOs reported student access to generative AI tools is limited to specific programs or departments, with this subgroup made up entirely of private nonprofit CTOs. And 5 percent of the sample reported that students at their institution have access to a custom-built generative AI tool.

    Among community college CTOs specifically (n=22), 36 percent said that students have access to generative AI tools, all through an institutionwide license.

    Roughly half of institutions represented do not offer student access to generative AI tools. Some 36 percent of CTOs reported that their college doesn’t offer access but is considering doing so, while 15 percent said that their institution doesn’t offer access and is not considering it.

    Of those CTOs who reported some kind of student access to generative AI and answered a corresponding question about how they pay for it (n=45), half said associated costs are covered by their central IT budget; most of these are public institution CTOs. Another quarter said there are no associated costs. Most of the rest of this group indicated that funding comes from individual departments. Almost no one said costs are passed on to students, such as through fees.

    Among CTOs from institutions that don’t provide student access who responded to a corresponding question about why not (n=51), the top-cited barrier from a list of possibilities was costs. Ethical concerns, such as those around potential misuse and academic integrity, factored in, as well, followed by concerns about data privacy and/or security. Fewer said there is no need or insufficient technical expertise to manage implementation.

    “I very, very strongly feel that every student that graduates from any institution of higher education must have at least one core course in AI, or significant exposure to these tools. And if we’re not doing that, I believe that we are doing a disservice to our students,” Pendse said. “As a nation we need to be prepared, which means we as educators have a responsibility. We need to step up and not get bogged down by cost, because there are always solutions available. Michigan welcomes the opportunity to partner with any institution out there and provide them guidance, all our lessons learned.”

    The Case for Institutional Access

    But do students really need their institutions to provide access to generative AI tools, given that rapid advances in AI technology also have led to fewer limitations on free, individual-level access to products such as ChatGPT, which many students have and can continue to use on their own?

    Experts such as Sidney Fernandes, vice president and CIO of the University of South Florida, which offers all students, faculty and staff access to Microsoft Copilot, say yes. One reason: privacy and security concerns. USF users of Copilot Chat use the tool in a secure, encrypted environment to maintain data privacy. And the data users share within USF’s Copilot enterprise functions—which support workflows and innovation—also remains within the institution and is not used to train AI models.

    There’s no guarantee, of course, that students with secure, institutional generative AI accounts will use only them. But at USF and beyond, account rollouts are typically accompanied by basic training efforts—another plus for AI literacy and engagement.

    “When we offer guidance on how to use the profiles, we’ve said, ‘If you’re using the commercially available chat bots, those are the equivalent of being on social media. Anything you post there could be used for whatever reason, so be very careful,’” Fernandes told Inside Higher Ed.

    In Inside Higher Ed’s survey, CTOs who reported student access to generative AI tools by some means were no more likely than the group over all to feel highly confident in their institution’s cybersecurity practices—although CTOs as a group may have reason to worry about students and cybersecurity generally: Just 26 percent reported their institution requires student training in cybersecurity.

    Colleges can also grant students access to tools that are much more powerful than freely available and otherwise prompt-limited chat bots, as well as tools that are more integrated into other university platforms and resources. Michigan, for instance, offers students access to an AI assistant and another conversational AI tool, plus a separate tool that can be trained on a custom dataset. Access to a more advanced and flexible tool kit for those who require full control over their AI environments and models is available by request.

    Responsive AI and the Role of Big Tech

    Another reason for institutions to lead on student access to generative AI tools is cultural responsiveness, as AI tools reflect the data they’re trained on, and human biases often are baked into that data. Muhsinah Morris, director of Metaverse programs at Morehouse College, which has various culturally responsive AI initiatives—such as those involving AI tutors that look like professors—said it “makes a lot of sense to not put your eggs in one basket and say that basket is going to be the one that you carry … But at the end of the day, it’s all about student wellness, 24-7, personalized support, making sure that students feel seen and heard in this landscape and developing skills in real time that are going to make them better.”

    The stakes of generative AI in education, for digital equity and beyond, also implicate big tech companies whose generative AI models and bottom lines benefit from the knowledge flowing from colleges and universities. Big tech could therefore be doing much more to partner on free generative AI access with colleges and universities, and not just on the “2.0” and “3.0” models, Morris said.

    “They have a responsibility to also pour back into the world,” she added. “They are not off the hook. As a matter of fact, I’m calling them to the carpet.”

    Jenay Robert, senior researcher at Educause, noted that the organization’s 2025 AI Landscape Study: Into the Digital AI Divide found that more institutions are licensing AI tools than creating their own, across a variety of capabilities. She said digital equity is “certainly one of the biggest concerns when it comes to students’ access to generative AI tools.” Some 83 percent of respondents in that study said they were concerned about widening the digital divide as an AI-related risk. Yet most respondents were also optimistic about AI improving access to and accessibility of educational materials.

    Of course, Robert added, “AI tools won’t contribute to any of these improvements if students can’t access the tools.” Respondents to the Educause landscape study from larger institutions were more likely than those from smaller ones to report that their AI-related strategic planning includes increasing access to AI tools.

    Inside Higher Ed’s survey also reveals a link between institution size and access, with student access to generative AI tools through an institutionwide license, especially, increasing with student population. But just 11 percent of CTOs reported that their institution has a comprehensive AI strategy.

    Still, Robert cautioned that “access is only part of the equation here. If we want to avoid widening the digital equity divide, we also have to help students learn how to use the tools they have access to.”

    In a telling data point from Educause’s 2025 Students and Technology Report, more than half of students reported that most or all of their instructors prohibit the use of generative AI.

    Arizona State University, like Michigan, collaborated early on with OpenAI, but it has multiple vendor partners and grants students access to generative AI tools through an institutionwide license, through certain programs and through custom-built tools. ASU closely follows generative AI consumption in a way that allows it to meet varied needs across the university in a cost-effective manner, as “the cost of one [generative AI] model versus another can vary dramatically,” said Kyle Bowen, deputy CIO.

    “A large percentage of students make use of a moderate level of capability, but some students and faculty make use of more advanced capability,” he said. “So everybody having everything may not make sense. It may not be very cost-sustainable. Part of what we have to look at is what we would describe as consumption-based modeling—meaning we are putting in place the things that people need and will consume, not trying to speculate what the future will look like.”

    That’s what even institutions with established student access are “wrestling with,” Bowen continued. “How do we provide that universal level of AI capability today while recognizing that that will evolve and change, and we have to be ready to have technology for the future, as well, right?”

    Source link

  • Don’t Give Trump Student, Faculty Names, Nationalities

    Don’t Give Trump Student, Faculty Names, Nationalities

    The American Association of University Professors is warning college and university lawyers not to provide the U.S. Education Department’s Office for Civil Rights the names and nationalities of students or faculty involved in alleged Title VI violations.

    The AAUP’s letter comes after The Washington Post reported last week that Education Department higher-ups directed OCR attorneys investigating universities’ responses to reports of antisemitism to “collect the names and nationalities of students who might have harassed Jewish students or faculty.” The department didn’t respond to Inside Higher Ed’s requests for comment Thursday.

    In a 13-page Wednesday letter to college and university general counsels’ offices, four law professors serving as AAUP counsel wrote that higher education institutions “are under no legal compulsion to comply.” The AAUP counsel further urged them “not to comply, given the serious risks and harms of doing so”—noting that the Trump administration is revoking visas and detaining noncitizens over “students’ and faculty members’ speech and expressive activities.” The administration has targeted international students and other scholars suspected of participating in pro-Palestinian advocacy.

    Title VI of the federal Civil Rights Act of 1964 prohibits discrimination based on, among other things, shared ancestry, which includes antisemitism. But the AAUP counsel wrote that “Title VI does not require higher education institutions to provide the personally identifiable information of individual students or faculty members so that the administration can carry out further deportations.”

    And Title VI investigations, they wrote, “are not intended to determine whether the students and faculty who attend these schools have violated any civil rights laws, let alone discipline or punish students or faculty.” They wrote that investigations are instead “intended to determine whether the institution itself has discriminated.”

    Providing this information to the federal government may violate the First Amendment rights of those targeted, plus the Family Educational Rights and Privacy Act and state laws, they wrote, adding that this information shouldn’t be turned over without “clear justification for the release of specific information related to a legitimate purpose in the context of a particular active investigation.”

    Source link

  • AI is new — the laws that govern it don’t have to be

    AI is new — the laws that govern it don’t have to be

    On Monday, Virginia Governor Glenn Youngkin vetoed House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act. The bill would have set up a broad legal framework for AI, adding restrictions to its development and its expressive outputs that, if enacted, would have put the bill on a direct collision course with the First Amendment.

    This veto is the latest in a number of setbacks to a movement across many states to regulate AI development that originated with a working group put together last year. In February, that group broke down — further indicating upheaval in a once ascendant regulatory push.

    At the same time, another movement has gained steam. A number of states are turning to old laws, including those prohibiting fraud, forgery, discrimination, and defamation, which have long managed the same purported harms stemming from AI in the context of older technology.

    Gov. Youngkin’s veto statement for HB 2094 echoed the notion that existing laws may suffice, stating, “There are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more.” FIRE has pointed to the reach of current law in previous statements, part of a series of AI-related interventions we’ve made as the technology has come to dominate state legislative agendas, including in states like Virginia.

    The simple idea that current laws may be sufficient to deal with AI initially eluded many lawmakers but is quickly becoming common sense in a growing number of states. While existing laws will not always be applied prudently, the emerging trend away from hasty lawmaking and toward more deliberation bodes well for the intertwined future of AI and free speech.

    The regulatory landscape

    AI offers the promise of a new era of knowledge generation and expression, and development continues to advance toward that vision at a critical juncture. Companies are updating their models at a breakneck pace, epitomized by OpenAI’s popular new image generation tool.

    Public and political interest, fueled by fascination and fear, may thus continue to intensify over the next two years — a period during which AI, still emerging from its nascent stage, will remain acutely vulnerable to threats of new regulation. Mercatus Center Research Fellow and leading AI policy analyst Dean W. Ball has hypothesized that 2025 and 2026 could represent the last two years to enact the laws that will be in place before AI systems with “qualitatively transformative capabilities” are released.

    With AI’s rapid development and deployment as the backdrop, states have rushed to propose new legal frameworks, hoping to align AI’s coming takeoff with state policy objectives. Last year saw the introduction of around 700 bills related to AI, covering everything from “deepfakes” to the use of AI in elections. This year, that number is already approaching 900-plus.

    Texas’s TRAIGA, the Texas Responsible Artificial Intelligence Governance Act, has been the highest-profile example from this year’s wave of restrictive AI bills. Sponsored by Republican State Rep. Giovanni Capriglione, it is one of several “algorithmic discrimination” bills that would impose liability on developers, deployers, and often distributors of AI systems that may introduce a risk of “algorithmic discrimination.”

    Other examples include the recently vetoed HB 2094 in Virginia, Assembly Bill A768 in New York, and Legislative Bill 642 in Nebraska. While the bills have several problems, most concerning is their inclusion of a “reasonable care” negligence standard that would hold AI developers and users liable if there is a greater than 50% chance they could have “reasonably” prevented discrimination.

    Such liability provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. The “chill” of these kinds of provisions threatens a broad array of important applications. 

    In Connecticut, for instance, children’s hospitals have warned that the vagueness and breadth of such regulations could limit health care providers’ ability to use AI to improve cancer screenings. These bills also compel regular risk reports on the models’ expressive outputs, similar to requirements that a federal court held unconstitutional under the First Amendment in other contexts last year.

    So far, only Colorado has enacted such a law. Its implementation, spearheaded by the statutorily authorized Colorado Artificial Intelligence Impact Task Force, won’t assuage any skeptics. Even Gov. Jared Polis, who conceived the task force and signed the bill, has said it deviates from standard anti-discrimination laws “by regulating the results of AI system use, regardless of intent,” and has encouraged the legislature to “reexamine the concept” as the law is finalized.

    With a mandate to resolve this and other points of tension, the task force has come up almost empty-handed. In its report last month, it reached consensus on only “minor … changes,” while remaining deadlocked on substantive areas such as the law’s reasonable-care language, which parallels TRAIGA’s.

    The sponsors of TRAIGA reached a similar impasse as the bill came under intense political scrutiny. Rep. Capriglione responded earlier this month by dropping TRAIGA in favor of a new bill, HB 149. Among HB 149’s provisions, many of which run headlong into protected expression, is a proposed statute holding that “an artificial intelligence system shall not be developed or deployed in a manner that intentionally results in political viewpoint discrimination” or that “intentionally infringes upon a person’s freedom of association or ability to freely express the person’s beliefs or opinions.”

    But this new language overlooks a landmark Supreme Court ruling from just last year, which recognized that Texas and Florida laws imposing similar prohibitions on political discrimination by social media platforms raised significant First Amendment concerns.

    A more modest alternative

    An approach different from that taken in Colorado and Texas appears to be taking root in Connecticut. Last year, Gov. Ned Lamont signaled he would veto Connecticut Senate Bill 2, a bill similar to the law Colorado passed. In reflecting on his reservations, he noted, “You got to know what you’re regulating and be very strict about it. If it’s, ‘I don’t like algorithms that create biased responses,’ that can go any of a million different ways.” 

    At a press conference at the time of the bill’s consideration, his office suggested existing Connecticut anti-discrimination laws could already apply to AI use in relevant areas like housing, employment, and banking.

    Yale School of Management scholars Jeffrey Sonnenfeld and Stephen Henriques expanded on the idea, noting that Connecticut’s Unfair Trade Practices Act would seem to cover major AI developers and small “deployers” alike. They argue that, rather than new legislation, a preferable route would be for the state attorney general to clarify how existing laws can remedy the harms to consumers that sparked Senate Bill 2 in the first place.

    Connecticut isn’t alone. In California, which often sets the standard for tech law in the United States, two bills — AB 2930, focusing on liability for algorithmic discrimination in the same manner as the Colorado and Texas bills, and SB 1047, focusing on liability for “hazardous capabilities” — both failed. Gov. Gavin Newsom, echoing Lamont, stressed in his veto statement for SB 1047, “Adaptability is critical as we race to regulate a technology still in its infancy.”

    Newsom’s attorney general followed up by issuing extensive guidance on how existing California laws — such as the Unruh Civil Rights Act, California Fair Employment and Housing Act, and California Consumer Credit Reporting Agencies Act — already provide consumer protections for issues that many worry AI will exacerbate, such as consumer deception and unlawful discrimination. 

    New Jersey, Oregon, and Massachusetts have offered similar guidance, with Massachusetts Attorney General Andrea Joy Campbell noting, “Existing state laws and regulations apply to this emerging technology to the same extent as they apply to any other product or application.” And in Texas, where HB 149 still sits in the legislature, Attorney General Ken Paxton is currently reaching settlements in cases about the misuse of AI products in violation of existing consumer protection law.

    Addressing problems

    The application of existing laws, to be sure, must comport with the First Amendment’s broad protections. Accordingly, not all conceivable applications will be constitutional. But the core principle remains: states that are hitting the brakes and reflecting on the tools already available give AI developers and users the benefit of operating within established, predictable legal frameworks. 

    And if enforcement of existing laws runs afoul of the First Amendment, there is an ample body of legal precedent to provide guidance. Some might argue that AI poses different questions from prior technology covered by existing laws, but it departs in neither essence nor purpose. Properly understood, AI is a communicative tool used to convey ideas, like the typewriter and the computer before it.

    If there are perceived gaps in existing laws as AI and its uses evolve, legislatures may try targeted fixes. Last year, for example, Utah passed a statute clarifying that generative AI cannot serve as a defense to violations of state tort law — for example, a party cannot claim immunity from liability simply because an AI system “made the violative statement” or “undertook the violative act.” 

    Rather than introducing entirely new layers of liability, this provision clarifies accountability under existing statutes. 

    Other ideas floated include “regulatory sandboxes”: voluntary arrangements in which private firms test applications of AI technology in collaboration with the state in exchange for certain regulatory mitigation. The aim is to offer a learning environment in which policymakers can study how law and AI interact over time, with emerging issues addressed by a regulatory scalpel rather than a hatchet.

    This reflects an important point. The trajectory of AI is largely unknowable, as is how rules imposed now will affect this early-stage technology down the line. Well-meaning laws to prevent discrimination this year could preclude broad swathes of significant expressive activity in coming years.

    FIRE does not endorse any particular course of action, but this is perhaps the most compelling reason lawmakers should consider the more restrained approach outlined above. Attempting to solve all theoretical problems of AI before the contours of those problems become clear is not only impractical but risks stifling innovation and expression in ways that may be difficult to reverse. History also teaches that many of the initial worries will never materialize.

    As President Calvin Coolidge observed, “If you see 10 troubles coming down the road, you can be sure that nine will run into the ditch before they reach you and you have to battle with only one of them.” We can address those that do materialize in a targeted manner as the full scope of the problems becomes clear.

    The wisest course of action may be patience. Let existing laws do their job and avoid premature restrictions. Like weary parents, lawmakers should take a breath — and maybe a vacation — while giving AI time to grow up a little.
