For the last two years, conversations about AI in education have tended to fall into two camps: excitement about efficiency or fear of replacement. Teachers worry they’ll lose authenticity. Leaders worry about academic integrity. And across the country, schools are trying to make sense of a technology that feels both promising and overwhelming.
But there’s a quieter, more human-centered opportunity emerging–one that rarely makes the headlines: AI can actually strengthen empathy and improve the quality of our interactions with students and staff.
Not by automating relationships, but by helping us become more reflective, intentional, and attuned to the people we serve.
As a middle school assistant principal and a higher education instructor, I’ve found that AI is most valuable not as a productivity tool, but as a perspective-taking tool. When used thoughtfully, it supports the emotional labor of teaching and leadership–the part of our work that cannot be automated.
From efficiency to empathy
Schools do not thrive because we write faster emails or generate quicker lesson plans. They thrive because students feel known. Teachers feel supported. Families feel included.
AI can assist with the operational tasks, but the real potential lies in the way it can help us:
Reflect on tone before hitting “send” on a difficult email
Understand how a message may land for someone under stress
Role-play sensitive conversations with students or staff
Anticipate barriers that multilingual families might face
Rehearse a restorative response rather than reacting in the moment
These are human actions–ones that require situational awareness and empathy. AI can’t perform them for us, but it can help us practice and prepare for them.
A middle school use case: Preparing for the hard conversations
Middle school is an emotional ecosystem. Students are forming identity, navigating social pressures, and learning how to advocate for themselves. Staff are juggling instructional demands while building trust with young adolescents whose needs shift by the week.
Some days, the work feels like equal parts counselor, coach, and crisis navigator.
One of the ways I’ve leveraged AI is by simulating difficult conversations before they happen. For example:
A student is anxious about returning to class after an incident
A teacher feels unsupported and frustrated
A family is confused about a schedule change or intervention plan
By giving the AI a brief description and asking it to take on the perspective of the other person, I can rehearse responses that center calm, clarity, and compassion.
This has made me more intentional in real interactions–I’m less reactive, more prepared, and more attuned to the emotions beneath the surface.
Empathy improves when we get to “practice” it.
Supporting newcomers and multilingual learners
Schools like mine welcome dozens of newcomers each year, many with interrupted formal education. They bring extraordinary resilience–and significant emotional and linguistic needs.
AI tools can support staff in ways that deepen connection, not diminish it:
Drafting bilingual communication with a softer, more culturally responsive tone
Helping teachers anticipate trauma triggers based on student histories
Rewriting classroom expectations in family-friendly language
Generating gentle scripts for welcoming a student experiencing culture shock
The technology is not a substitute for bilingual staff or cultural competence. But it can serve as a bridge–helping educators reach families and students with more warmth, clarity, and accuracy.
When language becomes more accessible, relationships strengthen.
AI as a mirror for leadership
One unexpected benefit of AI is that it acts as a mirror. When I ask it to review the clarity of a communication, or identify potential ambiguities, it often highlights blind spots:
“This sentence may sound punitive.”
“This may be interpreted as dismissing the student’s perspective.”
“Consider acknowledging the parent’s concern earlier in the message.”
These are the kinds of insights reflective leaders try to surface–but in the rush of a school day, they are easy to miss.
AI doesn’t remove responsibility; it enhances accountability. It helps us lead with more emotional intelligence, not less.
What this looks like in teacher practice
For teachers, AI can support empathy in similarly grounded ways:
1. Building more inclusive lessons
Teachers can ask AI to scan a lesson for hidden barriers–assumptions about background knowledge, vocabulary loads, or unclear steps that could frustrate students.
2. Rewriting directions for struggling learners
A slight shift in wording can make all the difference for a student with anxiety or processing challenges.
3. Anticipating misconceptions before they happen
AI can run through multiple “student responses” so teachers can see where confusion might arise.
4. Practicing restorative language
Teachers can try out scripts for responding to behavioral issues in ways that preserve dignity and connection.
These aren’t shortcuts. They’re tools that elevate the craft.
Human connection is the point
The heart of education is human. AI doesn’t change that–in fact, it makes it more obvious.
When we reduce the cognitive load of planning, we free up space for attunement. When we rehearse hard conversations, we show up with more steadiness. When we write in more inclusive language, more families feel seen. When we reflect on our tone, we build trust.
The goal isn’t to create AI-enhanced classrooms. It’s to create relationship-centered classrooms where AI quietly supports the skills that matter most: empathy, clarity, and connection.
Schools don’t need more automation.
They need more humanity–and AI, used wisely, can help us get there.
Timothy Montalvo, Iona University & the College of Westchester
Timothy Montalvo is a middle school educator and leader passionate about leveraging technology to enhance student learning. He serves as Assistant Principal at Fox Lane Middle School in Westchester, NY, and teaches education courses as an adjunct professor at Iona University and the College of Westchester. Montalvo focuses on preparing students to be informed, active citizens in a digital world and shares insights on Twitter/X @MrMontalvoEDU or on BlueSky @montalvoedu.bsky.social.
From the mid‑19th century to today, U.S. interventions in Latin America and the Caribbean have consistently combined military force, political influence, and economic pressure. Across this long arc, millions of lives have been shaped—often shattered—by policies that prioritize strategic advantage over human flourishing. Today’s geopolitical tensions with Venezuela are the latest flashpoint in a historical pattern that rewards elites while exacting profound human costs.
Note on Timing: This article is intentionally posted on Christmas Day 2025, a day traditionally associated with peace, goodwill, and reflection, to underscore the contrast between those ideals and the ongoing human toll of U.S. militarism and intervention abroad. The symbolic timing is a reminder that while many celebrate, others suffer the consequences of policies driven by power, profit, and geopolitics.
A Critical Warning for Students and Young People
As Higher Education Inquirer has repeatedly argued, the United States’ military footprint—its wars, recruitment programs, and entanglements with higher education—has deep consequences not just abroad but at home. ROTC programs and military enlistment are often marketed as pathways to education and economic stability, but they also funnel young people into systems with long‑term obligations, moral hazards, and psychological risk. Prospective enlistees and their families should think twice before committing to military pathways that may bind them to morally questionable conflicts and institutional control.
Moreover, U.S. higher education has become deeply entwined with kleptocracy, militarism, and colonialism, supporting war economies and benefiting from federal research contracts with defense and intelligence partners that obscure the real human costs of empire. These warnings are especially salient in the context of Venezuela and similar interventions, where human toll and geopolitical stakes demand deeper scrutiny.
Smedley Butler: War Is a Racket and the Business Plot
Major General Smedley D. Butler, among the most decorated U.S. Marines, became one of the U.S. military’s most outspoken critics. In his 1935 War Is a Racket, Butler rejected romantic notions of military glory and exposed the economic motives behind many interventions:
“War is a racket. It always has been. It is possibly the oldest, easily the most profitable, surely the most vicious.”
“I spent 33 years and four months in active military service… being a high‑class muscle man for Big Business, for Wall Street and for the bankers. In short, I was a racketeer for capitalism.”
“Only a small inside group knows what it is about. It is conducted for the benefit of the very few at the expense of the masses.”
Butler’s warnings were not abstract. In 1933, he was approached to lead a coup against President Franklin D. Roosevelt, known as the Business Plot, which he publicly exposed. His testimony before Congress revealed how elite interests sought to use military power to overthrow democratic government, an episode that underscores his critique of war as a tool for entrenched interests at the expense of ordinary people.
Historical Interventions and Their Toll
Below is a timeline of major U.S. interventions in the Americas, with estimated deaths, showing the human cost of policies that often served strategic or economic interests over humanitarian ones:
| Period | Location | Event / Nature of Intervention | Estimated Deaths |
| --- | --- | --- | --- |
| 1846–1848 | Mexico | Mexican-American War: Territorial conquest | ~25,000 Mexicans |
| 1898 | Cuba/P.R. | Spanish-American War: U.S. seized P.R.; Cuba protectorate | ~15,000–60,000 (90% disease) |
| 1914 | Mexico | Occupation of Veracruz: U.S. port seizure | ~300 Mexicans |
| 1915–1934 | Haiti | Military Occupation: Suppression of rebellions | ~3,000–15,000 |
| 1916–1924 | Dominican Rep. | Marine Occupation: Control of customs/finance | ~4,000 |
| 1954 | Guatemala | Op. PBSuccess: CIA coup against Árbenz; led to civil war | 150,000–250,000* |
| 1965 | Dominican Rep. | Op. Power Pack: U.S. intervention during civil war | (not given) |
| 2025 | Venezuela | Naval Blockade: Active maritime strikes and standoff | 100+ (to date) |
*Estimates include civilian casualties and deaths indirectly caused by U.S.-supported interventions.
Venezuela and the Global Politics of Intervention
Venezuela’s 2025 crisis is the latest in a long history of U.S. pressure in the hemisphere. A naval blockade—accompanied by maritime strikes and political isolation—has already produced more than 100 confirmed deaths. Historically, interventions like this have often prioritized U.S. strategic or economic interests over local welfare.
The situation is further complicated by global geopolitics. President Donald Trump, who recently pardoned key figures involved in controversial interventions, including Iran‑Contra actors, also maintains strategic ties with China and Russia, highlighting how interventions are entangled with global power plays that affect universities, recruitment pipelines, and domestic politics alike.
A Call to Rethink Intervention and Recruitment
Smedley Butler’s critique remains urgent: to “smash the racket,” profit must be removed from war, military force should be strictly defensive, and decisions about war must rest with those who bear its consequences. From Mexico to Venezuela—and including covert operations like Iran‑Contra—the historical record shows how interventions serve a narrow elite while imposing massive human costs.
HEI’s warnings underscore that higher education, ROTC programs, and military recruitment pipelines are not neutral pathways but deeply embedded parts of systems that reproduce extraction, militarism, and inequality. Students, educators, and families must critically evaluate the incentives and promises of military pathways and demand institutions that serve learning, opportunity, and justice rather than empire.
Sources
Butler, Smedley D. War Is a Racket. Round Table Press, 1935.
U.S. Congressional Record and Butler testimony on the Business Plot, 1934.
Kinzer, Stephen. Overthrow: America’s Century of Regime Change from Hawaii to Iraq.
Scott, Peter Dale. Cocaine Politics: Drugs, Armies, and the CIA in Central America.
Reporting on Trump pardons, Iran‑Contra participants, and global alliances (2020–2025).
Higher Education Inquirer, “Kleptocracy, Militarism, Colonialism: A Counterrecruiting Call for Students and Families,” December 7, 2025. (link)
Higher Education Inquirer, “The Hidden Costs of ROTC — and the Military Path,” November 28, 2025. (link)
Historical records on U.S. interventions: Mexican‑American War, Spanish‑American War, Guatemala (1954), Chile (1973), Argentina (1976–1983), El Salvador, Nicaragua, Panama, Venezuela (2025).
The director of Harvard University’s François-Xavier Bagnoud Center for Health and Human Rights will step down in January after seven years at the helm, Andrea Baccarelli, dean of the Harvard T. H. Chan School of Public Health, announced Tuesday. News of her departure follows months of criticism of the center’s Palestine Program for Health and Human Rights.
Mary Bassett’s last day as director will be Jan. 9, 2026, after which she will remain a professor of practice in the Social and Behavioral Sciences Department. Kari Nadeau, a professor of climate and population studies at Harvard, will serve as interim director. Bassett did not respond to a request for an interview Thursday. A Harvard spokesperson did not answer Inside Higher Ed’s questions about Bassett’s departure, including whether she was asked to step down, and instead pointed to Baccarelli’s message.
Baccarelli also announced that the center will shift its primary focus to children’s health.
“Over the past years, FXB has worked on a wide range of programs within the context of human rights, extending across varied projects, including those related to oppression, poverty, and stigma around the world,” he wrote. “We believe we can accomplish more, and have greater impact, if we go deeper in a primary area of focus.”
The center’s Palestine Program for Health and Human Rights drew increased scrutiny after Hamas’s Oct. 7, 2023, attack in Israel, including from former Harvard president Larry Summers and New York congresswoman Elise Stefanik. In previous years, the program partnered with Birzeit University in the West Bank, but Harvard declined to renew that partnership in the spring. In their April report on antisemitism on campus, Harvard officials detailed complaints from students about the program’s webinars, in which speakers allegedly “presented a demonizing view of Israel and Israelis.”
“One student told us that the FXB programming created the impression that ‘Israel exists solely to oppress Palestinians, and nothing else,’” the report stated.
This week is Thanksgiving in the United States, a time when many of us come together with family and friends to express gratitude for the positive things in our lives. The holiday season can also be a challenging time for those who are far from family and grappling with the prevalent loneliness of our modern era.
Perhaps worse than missing the company of others over the holidays is being with family who hold different views and beliefs from your own. The fact is, though, that when we come together with a large, diverse group of people at events we are bound to find a variety of viewpoints and personalities in the room.
People are complex and messy, and engaging with them is often a lot of work. Sometimes it seems easier to just not deal with them at all and “focus on ourselves” instead. Similarly, the vast amount of information available online often leads many graduate students and postdocs to think they can effectively engage in professional development, explore career options and navigate their next step on their own. Indeed, there are many amazing online tools and resources to help with much of this, but only by engaging other people in conversation can we fully understand how various practices, experiences and occupations apply to us as unique beings in the world. Generic advice is fine, but it can only be tailored through genuine dialogue with another person; some, however, believe they can find that dialogue in a machine.
Generative artificial intelligence (AI) technology has accelerated since the launch of ChatGPT in November 2022, and many people now lean on AI chatbots for advice and even companionship. The problem with this approach is that AI chatbots are, at least currently, quite sycophantic and don’t, by default, challenge a user’s worldview. Rather, they can reinforce one’s current beliefs and biases. Furthermore, since we as humans have a tendency to anthropomorphize things, we perceive the output of AI chatbots as “human” and think we are getting the type of “social” relationship and advice we need from a bot without all the friction of dealing with another human being in real life. So, while outsourcing your problems to a chatbot may feel easy, it cannot fully support you as you navigate your life and career. Meanwhile, generative AI has made the job application, screening and interview process incredibly impersonal and ineffective. One recent piece in The Atlantic put it simply (if harshly): “The Job Market is Hell.”
What is the solution to this sad state of affairs?
I am here to remind readers of the importance of engaging with real, human people to help you navigate your professional development, job search and life. Despite the fear of being rejected, making small talk or hearing things that may challenge you, engaging with other people will help you learn about professional roles available to you, discover unexpected opportunities, build critical interpersonal skills and, in the process, understand yourself (and how you relate with others) better.
For graduate students and postdocs today, it’s easy to feel isolated or spend too much time in your own head focusing on your perceived faults and deficiencies. You need to remember, though, that you are doing hard things, including leading research projects that investigate questions no one else has reported on before. But as you journey through your academic career and into your next step professionally, I encourage you to embrace the fact that true strength and resilience lie in our connections—with colleagues, mentors, friends and the communities we build.
Networks enrich your perspectives, foster resilience and can help you find not only jobs, but joy and fulfillment along the way. Take intentional steps to build and lean on your community during your time as an academic and beyond. Invest time, gratitude and openness in your relationships. Because when you navigate life’s challenges with others by your side, you don’t just survive—you thrive.
Practical Tips for Building and Leveraging Networks
For graduate students and postdocs, here are some action steps to foster meaningful networks to help you professionally and personally:
Tip 1: Seek Diverse Connections
Attend seminars, departmental events, professional conferences and interest groups—both within and outside your field.
Join and engage in online forums, LinkedIn groups and professional organizations that interest you. Create a career advisory group.
Tip 2: Practice Gratitude and Generosity
Thank peers and mentors regularly—showing appreciation strengthens relationships, opens doors and creates goodwill.
Offer help, such as reviewing your peers’ résumés, sharing job leads or simply listening. Reciprocity is foundational to strong networks.
Tip 3: Be Vulnerable and Authentic
Share struggles and setbacks. Vulnerability invites others to connect, offer advice and foster mutual support.
Be honest about your goals; don’t feel pressured to follow predefined paths set by others or by societal norms.
Tip 4: Leverage Formal Resources
Enroll in career design workshops or online courses, such as Stanford University’s “Designing Your Career.”
Utilize university career centers, alumni networks and faculty advisers for information and introductions.
Tip 5: Make Reflection a Habit
Set aside time weekly or monthly to review progress, map goals and consider input from your network.
Use journaling or guided exercises to deepen self-insight and identify what you want from relationships and careers.
Focus not just on professional “résumé virtues,” but also on “eulogy virtues”—kindness, honesty, courage and the quality of relationships formed.
These provide lasting meaning and fuel deep, authentic connections that persist beyond job titles and paychecks.
Strategies for Overcoming Isolation
Graduate students and postdocs are at particular risk for isolation and burnout, given the demands of research and the often-solitary nature of scholarship. Community is a proven antidote. Consider forming small groups with fellow students and postdocs to share resources, celebrate milestones and troubleshoot professional challenges together. Regular meetings can foster motivation and accountability. These can range from simple monthly coffee chats to something more structured, such as regular writing or job-search support groups. And, while online communities are not a perfect substitute for in-person support, postdocs can leverage Future PI Slack and graduate students can use their own Slack community for help and advice. You can also lean on your networks for emotional support and practical help, especially during stressful periods or setbacks.
Another practical piece of advice to build your network and connections is volunteer engagement. This could mean volunteering in a professional organization, committees at your institution or in your local community. Working together with others on shared projects in this manner helps build connections without the challenges many have with engaging others at purely social events. In addition, volunteering can help you develop leadership, communication and management skills that can become excellent résumé material.
Networking to Launch Your Career
Through the process of engaging with more people through an expanded network you also open yourself up to serendipity and opportunities that could enhance your overall training and career. Career theorists call this “planned happenstance.” The idea is simple: By putting yourself in community with others—attending talks, joining professional groups, volunteering for committees—you increase the odds that unexpected opportunities will cross your path. You meet people who do work you hadn’t considered, learn about opportunities before they’re posted and hear about initiatives that need someone with your skills earlier than most.
When I was a postdoc at Vanderbilt University, I volunteered for the National Postdoctoral Association (NPA), starting small by writing for their online newsletter (The POSTDOCket), and also became increasingly involved in the Vanderbilt Postdoctoral Association (VPA). These experiences were helpful as I transitioned to working in postdoctoral affairs as a higher education administrator after my postdoc. Writing for The POSTDOCket allowed me to interview administrators and leaders in postdoctoral affairs and, in the process, learn about working in the space. My leadership in the VPA showed I understood some of the needs of the postdoctoral community and could organize programming to support postdocs. I have become increasingly involved in the NPA over the past six years, culminating in serving as chair of our Board of Directors in 2025. This work has increased my national visibility and has resulted in invitations to speak to postdocs at different institutions and the opportunity to serve on a National Academies Roundtable, and I believe it helped me land my current role at Virginia Tech.
I share all this to reiterate that in uncertain job markets, it’s tempting to focus on polishing résumés or applying to ever more positions online. Those things can matter—but they’re not enough. Opportunities often come through expanding your network and engaging with people and activities you care about. They can reach you through your network long before they appear in writing, and they often can’t be fully anticipated when you first take on these “extracurricular activities.” A good first step to open yourself up to possibilities is to get involved in communities outside your direct school or work responsibilities. Doing so will improve your sense of purpose, help you build key transferable skills, increase your connections and aid in your transition to your next role.
Your training and career should not be a solitary climb, but rather a collaborative, evolving process of growth and discovery. A strong community and network are critical to your long-term well-being and success. And, in a world where setbacks and uncertainty are inevitable, connection is the constant that turns possibility into progress.
Chris Smith is Virginia Tech’s postdoctoral affairs program administrator. He serves on the National Postdoctoral Association’s Board of Directors and is a member of the Graduate Career Consortium—an organization providing a national voice for graduate-level career and professional development leaders.
Last year, FIRE launched the Free Speech Dispatch, a regular series covering new and continuing censorship trends and challenges around the world. Our goal is to help readers better understand the global context of free expression. Want to make sure you don’t miss an update? Sign up for our newsletter.
Yet another university erodes academic freedom to appease Beijing
In August, I released Authoritarians in the Academy, my book about the relationship between higher education, authoritarian regimes, and the censorship that internationalization has introduced into colleges and universities. And this month, an investigation released by The Guardian provided a perfect example of how this influence and censorship play out, in this case in the UK.
Earlier this year, Sheffield Hallam University told professor Laura Murphy, whose work the university had previously touted, to abandon her research into Uyghurs and rights abuses in China. The ban ultimately lasted for eight months until the school reversed course and issued an apology in October after Murphy threatened legal action. The Guardian reports that “the instruction for Murphy to halt her research came six months after the university decided to abandon a planned report on the risk of Uyghur forced labour in the critical minerals supply chain.”
There are multiple alleged reasons for the university’s decision to disavow research critical of the CCP, but they all boil down to fear of legal or financial retaliation from the same government at the center of academics’ investigations. Murphy suggested that Sheffield Hallam was “explicitly trading my academic freedom for access to the Chinese student market.” And this is a real challenge among university administrations today: fear that vindictive governments will punish noncompliant universities by cutting off their access to lucrative international student tuition.
Another likely reason was a warning from Sheffield Hallam’s insurance provider that it would no longer cover work produced by the university’s Helena Kennedy Centre for International Justice after a defamation suit from a company named in its research. The HKC has raised the ire of Chinese government officials before, leading to a block of Sheffield Hallam’s websites behind the Great Firewall. Regarding the ill will between CCP officials and the HKC, a university administrator wrote that “attempting to retain the business in China and publication of the [HKC] research are now untenable bedfellows” and complained of the negative effects on recruitment in the country, which looks to have suffered.
Most disturbing was a visit Chinese state security officials conducted in 2024 to the university’s Beijing office, where they questioned employees about the HKC’s research and the “message to cease the research activity was made clear.” An administrator said that “immediately, relations improved” when the university informed officials the research into human rights abuses would be dropped.
The university’s apology and reversal may not spell the end of the story. A South Yorkshire Police spokesperson suggested that, because of potential engagement with security officials in China, Sheffield Hallam may face investigation under the National Security Act related to a provision on “assisting a foreign intelligence service.”
NYC indie film festival falls victim to transnational repression
One of the most common misconceptions about free expression today is that nations with better speech protections are immune to censorship originating in less free countries. Case in point: New Yorkers hoping to attend the IndieChina Film Festival, set to begin on Nov. 8, could not do so because of repression in China.
Organizer Zhu Rikun said relentless pressure necessitated the cancellation of the event, with film directors in and outside China telling him en masse that they could not attend or requesting their films not be shown. Human Rights Watch also reports that Chinese artist Chiang Seeta warned that “nearly all participating directors in China faced intimidation” and even those abroad “reported that their relatives and friends in China were receiving threatening calls from police.”
Zhu, whose parents and friends in China are reportedly facing harassment as well, thought it would “be better” after moving to the U.S. “It turns out I was wrong,” he said.
Worrying UN cybercrime treaty nets dozens of signatures, with a notable exception
Late last month, 72 nations including France, Qatar, and China signed a treaty purportedly intended to fight “cybercrime,” but one that leaves the door open for authoritarian nations to use it to enlist other nations — free and unfree — in their campaigns to punish political expression on the internet. As I explained last year as the proposal went to the General Assembly, among other problems, the treaty fails to sufficiently define a “serious” crime taking place on computer networks other than that it’s punishable by a four-year prison sentence or more.
You might see the immediate problem here: Many nations, including some who ultimately signed on to the treaty, regularly punish online expression with long prison terms. A single TikTok video or an X post that offends or insults government officials, monarchs, or religious bodies can land people around the world in prison — sometimes for decades.
Despite earlier statements of support from a representative for the United States on the Ad Hoc Committee on Cybercrime, the U.S. ultimately did not sign the treaty and “is unlikely to sign or ratify unless and until we see implementation of meaningful human rights and other legal protections by the convention’s signatories.”
That’s not all. There’s plenty more news about speech, tech, and the internet:
New amendments to Kenya’s Computer Misuse and Cybercrimes Act are worrying activists in the country, including one that grants the National Computer and Cybercrimes Coordination Committee authority to block material that “promotes illegal activities” or “extreme religious and cultic practices.”
Influencers, beware: the Cyberspace Administration of China released new regulations requiring social media users publishing material on “sensitive” topics like law and medicine to prove their qualifications to do so. Platforms will also be required to assist in verifying those qualifications.
The much-maligned Online Safety Act continues to create new concerns for free expression in the UK. TechRadar reports that regulatory body OfCom is “using an unnamed third-party tool to monitor VPN use,” one likely employing AI capabilities. VPN use is, to no surprise, spiking in the UK in response to mandated age-checks under the online safety regulations.
Brazil is employing a new AI-powered online speech monitor to collect material from social media and blogs that can be used for prosecution of hate speech offenders in the country. Hate speech convictions can result in serious punishment in Brazil, like the one levied against a comedian sentenced to over eight years for offensive jokes this year.
The European Union Council’s “Chat Control” proposal to scan online communications and files for CSAM appears to be moving forward. The latest proposal removes the obligation for service providers to scan all material but encourages it to be done voluntarily. However, the text of the proposal allows for a “mitigation measure” requiring providers deemed high risk to take “all appropriate risk mitigation measures.”
Apple and Android removed gay dating apps from their app stores in China after “an order from the Cyberspace Administration of China.” A spokesperson for Apple said, “We follow the laws in the countries where we operate.”
India has somewhat narrowed the scope of its vast internet takedown machine, limiting the authority of those who can demand platforms block material to officials who reach a certain rank of power. Those ordering removals will now also be required to “clearly specify the legal basis and statutory provision invoked” and “the nature of the unlawful act.”
Chief Minister Siddaramaiah of the Indian state Karnataka is threatening a new law against misinformation that will punish those “giving false information to people, and disturbing communal harmony.”
Swiss man Emanuel Brünisholz will spend ten days in prison next month after choosing not to pay a 600 Swiss francs fine from his incitement to hatred conviction. Brünisholz’s offense was this 2022 Facebook comment: “If you dig up LGBTQI people after 200 years, you’ll only find men and women based on their skeletons. Everything else is a mental illness promoted through the curriculum.”
A Spanish court acquitted a Catholic priest of hate speech charges after a yearslong investigation into his online criticisms of Islam, including a 2016 article, “The Impossible Dialogue with Islam.”
Russian censorship laws should not dictate expression in the NHL
NHL teams have decided to abandon Pride warm-up jerseys entirely, out of fear of retaliation against their Russian players.
Continuing its widespread censorship of what it deems “gay propaganda” or “extremist” material, Russian media regulator Roskomnadzor banned the world’s largest anime database last month. Roskomnadzor blamed the block on MyAnimeList’s content “containing information propagating non-traditional sexual relations and/or preference.”
Singapore plans to roll out a new online safety commission with authority to order platforms to block posts and ban users and to demand internet service providers censor material as well. Initially, it intends to address harms like stalking but will eventually also target “the incitement of enmity.”
South Sudan’s National Security Service released comedian Amath Jok after four days in detention for insulting President Salva Kiir on TikTok, where she called him “a big thief wearing a hat.” But Jok isn’t out of the woods yet. Authorities have indefinitely banned her from using social media.
South Korea seeks to punish expression targeting other nations
In response to controversial protests against China, a Democratic Party of Korea lawmaker is pushing for legislation to punish those who “defame or insult” countries and their residents or ethnic groups. The bill would punish false information with fines and prison terms up to five years, and “insulting” speech with up to a year.
That effort garnered support this month when President Lee Jae Myung said that “hate speech targeting specific groups is being spread indiscriminately, and false and manipulated information is flooding” social media. He called it “criminal behavior” beyond the bounds of free expression.
Media censorship from Israel to Kyrgyzstan to Tunisia
The BBC has apologized to President Trump over “the manner” in which a clip of his speech on Jan. 6, 2021, was edited to give “the mistaken impression that President Trump had made a direct call for violent action,” but notes that its UK-aired “Trump: A Second Chance?” program was not defamatory. It remains unclear whether Trump will still follow through on his threat to file a suit against the British outlet, but in earlier comments he claimed to have an “obligation” to do so.
By a vote of 50 to 41, Israel’s Knesset passed the first of three steps in the approval of the Law to Prevent Harm to State Security by a Foreign Broadcasting Authority, which would give authorities permanent power to shut down and seize foreign media they deem “harmful” without needing judicial review or approval.
A BBC journalist and Vietnamese citizen who returned home to renew their passport has not been allowed to leave the country for months. The journalist was reportedly held by police for questioning about their journalism.
Thai activist Nutthanit Duangmusit was sentenced to two years for lèse majesté for her part in conducting a 2022 opinion poll to “gauge public opinion about whether they agree with the King being allowed to exercise his authority as he wishes.”
A Kyrgyz court declared two investigative media outlets “extremist,” banned them from publishing, and made distribution of their work illegal.
Investigative outlet Nawaat received a disturbing surprise from Tunisian authorities on Oct. 31: a notice slipped under their office door without even a knock, warning them to suspend all activities for a month.
Tanzanian police warn against words or images causing “distress”
In response to protests over President Samia Suluhu Hassan’s reelection, Tanzanian authorities issued a disturbing warning to the country: text messages or online posts could have serious consequences. The mass text sent to Tanzanian residents warned, “Avoid sharing images or videos that cause distress or degrade someone’s dignity. Doing so is a criminal offense and, if found, strict legal action will be taken.”
Hundreds have indeed been charged with treason, including one woman whose offense was recommending that protesters buy gas masks for protection at demonstrations.
Masih Alinejad’s would-be killers sentenced to 25 years in prison
In 2022, journalist and women’s rights activist Masih Alinejad was the target of an Iran-coordinated assassination plot that culminated in a hit man arriving outside her New York home with an AK-47. Late last month, two men were sentenced for their involvement in the attempt. The men, Rafat Amirov and Polad Omarov, were handed 25 years each in a Manhattan federal court. Regarding the verdict, Alinejad said: “I love justice.”
Ailing novelist granted pardon from Algerian president
Some parting good news: Boualem Sansal, an 81-year-old French-Algerian novelist who is suffering from cancer, has been granted a presidential pardon after serving one year of a five-year sentence. Sansal was arrested late last year and convicted of undermining national unity and insulting public institutions. His humanitarian pardon from Algerian president Abdelmadjid Tebboune comes after months of advocacy from European leaders.
One of the great ironies and great frustrations of my career teaching first-year college writing was having students enter our class armed with a whole host of writing strategies which they had been explicitly told they needed to know “for college,” and yet those strategies—primarily the following of prescriptive templates—were entirely unsuited to the experience students were going to have over the next 15 weeks of our course (and beyond).
I explored and diagnosed these frustrations in Why They Can’t Write, and many other writing teachers in both high school and college shared that they’d seen the same things and been equally (or more) frustrated by them. In the intervening years, there’s been some progress, but frankly, not enough, primarily because the structural factors that distorted how writing is taught precollege have not been addressed.
As long as writing is primarily framed as workforce preparation to be tested through standardization and quantification, students will struggle when invited into a more nuanced conversation that requires them to mine their own thoughts and experiences of the world and put those thoughts and experiences in juxtaposition with the ideas of others. The good news, in my experience, is that once invited into this struggle, many students are enthusiastic to engage, at least once they genuinely believe that you are interested in the contours of their minds and their experiences.
Clark calls for a “higher ed and secondary ed alliance” based in the values we all at least claim to share: free inquiry, self-determination and an appreciation for lives that are more than the “skills” we’re supposed to bring to our employers.
Something I can’t help but note is that the challenges college instructors are having getting students to steer clear of outsourcing their thinking to large language models would be significantly lessened if students had a greater familiarity with thinking during their secondary education years. Unfortunately, the system of indefinite future reward that has been reduced to pure transactions in exchange for grades and credentials has signaled that the outputs of the homework machine are satisfactory, so why not just give in?
When I go to campuses and schools and have the opportunity to speak to students, I try to list all kinds of reasons why they shouldn’t just give in, reasons which, in the end, boil down to the fact that being a big dumb-dumb who doesn’t know anything and can’t do anything without the aid of a predictive text-generation machine is simply an unfulfilling and unpleasant way to go through life.
In short, they will not be happy, even if they find ways to navigate their “work” with the aid of AI, because humans simply need more than this from our existences.
In a world where machines can handle the technical knowledge, the only differentiator is being human.
This is not news to those of us with those degrees, like my sister-in-law, who took her liberal arts degree from Denison University all the way to a general counsel job at a Fortune 300 company, or someone else with a far humbler résumé … me.
As I wrote in 2013 in this very space, the key to my success as an adult who has had to repeatedly adapt to a changing world is my liberal arts degrees, degrees that armed me with foundational and enduring skills that have served me quite well.
But, of course, it is about more than these skills. My pursuit of these degrees also allowed me to consider what a good life should be. That knowledge has put me in a position where—knock wood—I wake up just about every morning looking forward to what I have to do that day.
This is true even as the things I most care about—education, reading/writing, uh … democracy—appear to be inexorably crumbling around me. Perhaps this is because my knowledge of the value of humanistic study as something more than a route to a good job makes me more willing to fight for its continuation.
Sometimes when I encounter some hand-wringing about the inevitability of AI and the uncertainty of the future, I want to remind the fretful that we actually have a very sound idea of what we should be emphasizing, the same stuff we always should have been emphasizing—teaching, learning, living, being human.
We have clear notions of what this looks like. The main question now is if we have the collective will to move toward that future, or if we will give in to something much darker, much less satisfying and much less human.
COLUMBUS, OHIO — Artificial intelligence-based products and software for college admissions and operations are proliferating in the higher education world.
How to choose from among them? Well, leaders can start by identifying a problem that is actually in need of an AI solution.
That is one of the core pieces of advice from a panel on deploying AI technology responsibly in college administration at the National Association for College Admission Counseling’s conference last week.
Jasmine Solomon, senior associate director of systems operations at New York University, described a “flooded marketplace” of AI products advertised for a range of higher ed functions, from tutoring systems to retention analytics to admissions chatbots.
“Define what your AI use case is, and then find the purpose-built tool for that,” Solomon said. “If you’re using a general AI model or AI tool for an unintended purpose, your result is going to be poor.”
Asking why before you buy
It’s also worth considering whether AI is the right tool.
“How does AI solve this problem better? Because maybe your team or the tools that you already have can solve this problem,” Solomon said. “Maybe you don’t need an AI tool for this.”
Experts on the panel pointed out that administrators also need to think about who will use the tool, the potential privacy pitfalls of it, and its actual quality.
As Solomon put it, “Those built-in AI features — are they real? Are they on a future-release schedule, or is it here now? And if it’s here now, is it ready for prime time or is it ‘here now, and we’re beta testing.’”
Other considerations in deploying AI include those related to ethics, compliance and employee contracts.
Institutions need to be mindful of workflows, staff roles, data storage, privacy and AI stipulations in collective bargaining contracts, said Becky Mulholland, director of first-year admission and operations at the University of Rhode Island.
“For those who are considering this, please, please, please make sure you’re familiar with those aspects,” Mulholland said. “We’ve seen this not go well in some other spaces.”
On top of all that is the environmental impact of AI. One estimate found that AI-based search engines can use as much as 30 times more energy than traditional search. The technology also uses vast amounts of water to cool data centers.
Panelists had few definitive answers for resolving AI’s environmental problems at the institutional level.
“There’s going to be a space for science to find some better solutions,” Mulholland said. “We’re not there right now.”
Solomon pointed to the pervasiveness of AI tools already embedded in much of our digital technology and argued untrained use could worsen the environmental impact.
“If they’re prompting [AI] 10, 20 times just to get the answer they want, they’ve used far more energy than if they understood prompt engineering,” Solomon said.
Transparency is also important. At NYU, Solomon said the university was careful to ensure prospective students knew they were talking with AI when interacting with its chatbot — so much so that they named the tool “NYUAdmissionsBot” to make its virtual nature as explicit as possible.
“We wanted to inform them every step of the way that you were talking to AI when you were using this chatbot,” Solomon said.
‘You need time to test it’
After all the big questions are asked and answered, and an AI solution chosen, institutions still have the not-so-small task of rolling the technology out in a way that is effective in both the short and long term.
The rollout of NYU’s chatbot in spring 2024 took “many, many months,” according to Solomon. “If a vendor tells you, ‘We will be up in a week,’ multiply that by like a factor of 10. You need time to test it.” The extra time can ensure a feature is actually ready when it’s unveiled for use.
The upside to all that time and effort for something like an admissions chatbot, Solomon noted, is that the AI feature can be available around-the-clock to answer inquiries, and it can quickly address the most commonly asked questions that would normally be flooding the inboxes of admissions staff.
But even after a successful initial rollout of an AI tool or feature, operations staff aren’t done.
Solomon described a continuous cycle of developing key metrics of success, running controlled experiments with an AI product and carefully examining data from AI use, including by having a human look over the shoulder of the robots. In NYU’s case, this included reviewing the responses the chatbot gave to inquiries from prospective students.
“AI is evolving rapidly. So every six months, you really do want to test again, because it will be different,” Solomon said. “We did find that as we moved forward, we could decrease the number of hard-coded responses and rely more on the generative. And that was because the AI got better, but also because our knowledge got better.”
Solomon recommended regular error checks and performance audits and warned against overreliance on AI.
“AI is not a rotisserie. You don’t set it and forget it. It will burn it down,” she said. “It’s changing too fast.”
The rapid adoption and development of AI has rocked higher education and thrown into doubt many students’ career plans and as many professors’ lesson plans. The best and only response is for students to develop capabilities that can never be authentically replicated by AI because they are uniquely human. Only humans have flesh and blood bodies. And these bodies are implicated in a wide range of Uniquely Human Capacities (UHCs), such as intuition, ethics, compassion, and storytelling. Students and educators should reallocate time and resources from AI-replaceable technical skills like coding and calculating to developing UHCs and AI skills.
Adoption of AI by employers is increasing while expectations for AI-savvy job candidates are rising. College students are getting nervous: 51% are second-guessing their career choice and 39% worry that their job could be replaced by AI, according to Cengage Group’s 2024 Graduate Employability Report. Recently, I heard a student at an on-campus Literacy AI event ask an OpenAI representative if she should drop her efforts to be a web designer. (The representative’s response: spend less time learning the nuts and bolts of coding, and more time learning how to interpret and translate client goals into design plans.)
At the same time, AI capabilities are improving quickly. Recent frontier models have added “deep research” (web search and retrieval) and “reasoning” (multi-step thinking) capabilities. Both produce better, more comprehensive, accurate, and thoughtful results, performing broader searches and developing responses step by step. Leading models are beginning to offer agentic features, which can do work for us independently, such as coding. American AI companies are investing hundreds of billions in a race to develop Artificial General Intelligence (AGI), a loosely defined state of the technology in which AI can perform at least as well as humans in virtually any economically valuable cognitive task: it can act autonomously, learn, plan, and adapt, and interact with the world in a general, flexible way, much as humans do. Some experts suggest we may reach this point by 2030, although others have a longer timeline.
Hard skills that may be among the first to be replaced are those that AI can do better, cheaper, and faster. As a general-purpose tool, AI can already perform basic coding, data analysis, administrative, routine bookkeeping and accounting, and illustration tasks that previously required specialized tools and experience. I have had my own mind-blowing “vibe-coding” experience, creating custom apps with only a limited grasp of coding syntax. AIs are capable of quantitative, statistical, and textual analysis that might have required Excel or R in the past. According to Deloitte, AI initiatives are touching virtually every aspect of companies’ businesses, affecting IT, operations, and marketing the most. AI can create presentations driven by natural language that make manual PowerPoint drafting skills less essential.
Humans’ Future-Proof Strategy
How should students, faculty and staff respond to the breathtaking pace of change and profound uncertainties about the future of labor markets? The OpenAI representative was right: reallocate time and resources from easily automatable skills to those that only humans with bodies can do. Let us spend less time teaching and learning skills that are likely to be automated soon.
Technical Skills OUT → Uniquely Human Capacities IN
Basic coding → Mindfulness, empathy, and compassion
Data entry and bookkeeping → Ethical judgment, meaning making, and critical thinking
Mastery of single-purpose software (e.g., PowerPoint, Excel, accounting apps) → Authentic and ethical use of generative and other kinds of AI to augment UHCs
Instead, students (and everyone) should focus on developing Uniquely Human Capacities (UHCs). These are abilities that only humans can authentically perform because they require a human body. For example, intuition is inarticulable, immediate knowledge that we sense somatically, in our gut. It is how we empathize, show compassion, evaluate morality, listen and speak, love, appreciate and create beauty, play, collaborate, tell stories, find inspiration and insight, engage our curiosity, and emote. It is how we engage with the deep questions of life and ask the really important questions.
According to Gholdy Muhammad in Unearthing Joy, a reduced emphasis on skills can improve equity by creating space to focus on students’ individual needs. She argues that standards and pedagogies need to also reflect “identity, intellectualism, criticality, and joy.” These four dimensions help “contextualize skills and give students ways to connect them to the real world and their lives.”
The National Association of Colleges and Employers has created a list of eight career readiness competencies that employers say are necessary for career success. Take a look at the list below and you will see that seven of the eight are UHCs. The eighth, technology, underlines the need for students and their educators to understand and use AI effectively and authentically.
For example, an entry-level finance employee who has developed their UHCs will be able to nimbly respond to changing market conditions, interpret the intentions of managers and clients, and translate these into effective analysis and creative solutions. They will use AI tools to augment their work, adding greater value with less training and oversight.
Widen Humans’ Comparative Advantage
As demonstrated in the example above, our UHCs are humans’ unfair advantage over AI. How do we develop them, ensuring the employability and self-actualization of students and all humans?
The foundation is mindfulness. Mindfulness is about being fully present with ourselves and others, and accepting what arises, primarily via bodily sensations, without judgment or preference. It allows us to accurately perceive reality, including our natural intuitive connection with other humans, a connection AI cannot share. Mindfulness can be developed during and beyond meditation: moments of stillness devoted to mindfulness. Mindfulness practice has been shown to improve self-knowledge, support career goal-setting, and enhance creativity.
Mindfulness supports intuitive thinking and metacognition, our ability to think clearly about thinking. Non-conceptual thinking, using our whole bodies, entails developing our intuition and a growth mindset. The latter means recognizing that we are all works in progress, and that learning is the product of careful risk-taking and learning from errors, supported by other humans.
These practices support deep, honest, authentic engagement with other humans of all types. (These are not available over social media.) For students, this means engaging with each other in class, study groups, clubs, and elsewhere on campus, as well as engaging with faculty in class and office hours. Such engagement with humans can feel unfamiliar and awkward as we emerge from the pandemic. However, these interactions are a critical way to practice and improve our UHCs.
Literature and cinema are ways to engage with and develop empathy and understanding for humans you do not know, who may no longer be alive or may never have existed at all. Fiction is perhaps the only way to experience in the first person what a stranger is thinking and feeling.
Indeed, every interaction with the world is an opportunity to practice those Uniquely Human Capacities (UHCs):
Use your imagination and creativity to solve a math problem.
Format your spreadsheet or presentation or essay so that it is beautiful.
Get in touch with the feelings that arise when faced with a challenging task.
Many students tell me they are in college to better support and care for family. As you do the work, let yourself experience it as an act of love for them.
AI Can Help Us Be Better Humans
AI usage can dull our UHCs or sharpen them. We should use AI to challenge ourselves to improve our work, not to provide shortcuts that make our work average, boring, or worse. Ethan Mollick (2024) describes the familiar roles AIs can profitably play in our lives. Chief among these is as a patient, always available, if sometimes unreliable tutor. A tutor will give us helpful and critical feedback and hints but never the answers. A tutor will not do our work for us. A tutor will suggest alternative strategies, and we can instruct them to nudge us to check on our emotions, physical sensations, and the moral dimensions of our work. When we prompt AI for help, we should explicitly give it the role of a tutor or editor (as I did with Claude for this article).
How do we assess whether we and our students are developing our UHCs? We can develop personal and work portfolios that tell the stories of the connections, insights, and benefits to society we have made. We can get honest testimonials from trusted human partners and engage in critical yet self-compassionate introspection and journaling. Deliberate practice with feedback, in real life and in role-playing scenarios, can all be valuable. One thing that will not work as well: traditional grades and quantitative measures. After all, humanity cannot be measured.
In a future where AI or AGI assumes the more rote and mechanical aspects of work, we humans are freed to build our UHCs, to become more fully human. An optimistic scenario!
What Could Go Wrong?
The huge, profit-seeking transnational corporations that control AI may soon feel greater pressure to show investors a return on their enormous investment. This could drive up costs for users, widening the capabilities gap between those with means and the rest. It could also result in Balkanized AI, where each model is embedded with political, social, and other biases that appeal to different demographics. We see this beginning with Claude, which prioritizes safety, and Grok, built to emphasize free expression.
In addition, AI could become good enough at faking empathy, morality, intuition, sense-making, and other UHCs. In a competitive, winner-take-all economy with even less government regulation and a leakier safety net, companies may aggressively reduce hiring at the entry level and of (expensive) high performers. Many of the job functions of the former can be most easily replaced by AI, while mid-level professionals can use AI to perform at a higher level.
Finally (and this is not an exhaustive list): students, and all of us, may succumb to the temptation of using AI to shortcut our work, slowing or reversing the development of critical thinking, analytical skills, and subject matter expertise. The tech industry has perfected, over twenty years, the science of making our devices virtually impossible to put down, so that we are “hooked.”
Keeping Humans First
The best way to reduce the risks posed by AI-driven change is to develop our students’ Uniquely Human Capacities while actively engaging policymakers and administrators to ensure a just transition. This enhances the unique value of flesh-and-blood humans in the workforce and society. Educators across disciplines should identify lower value-added activities vulnerable to automation and reorient curricula toward nurturing UHCs. This will foster not only employability but also personal growth, meaningful connection, and equity.
Even in the most challenging scenarios, we are unlikely to regret investing in our humanity. Beyond being well-employed, what could be more rewarding than becoming more fully actualized, compassionate, and connected beings? By developing our intuitions, morality, and bonds with others and the natural world, we open lifelong pathways to growth, fulfillment, and purpose. In doing so, we build lives and communities resilient to change, rich in meaning, and true to what it means to be human.
The article represents my opinions only, not necessarily those of the Borough of Manhattan Community College or CUNY.
Brett Whysel is a lecturer in finance and decision-making at the Borough of Manhattan Community College, CUNY, where he integrates mindfulness, behavioral science, generative AI, and career readiness into his teaching. He has written for Faculty Focus, Forbes, and The Decision Lab. He is also the co-founder of Decision Fish LLC, where he develops tools to support financial wellness and housing counselors. He regularly presents on mindfulness and metacognition in the classroom and is the author of the Effortless Mindfulness Toolkit, an open resource for educators published on CUNY Academic Works. Prior to teaching, he spent nearly 30 years in investment banking. He holds an M.A. in Philosophy from Columbia University and a B.S. in Managerial Economics and French from Carnegie Mellon University.
The creation of a new super university in South East England, through the merger of Kent and Greenwich, signals both a turning point and a warning.
Advocates see consolidation as the promise of scale and resilience.
Critics fear homogenisation, loss of identity, and narrowing of choice.
Both could be right.
What matters most is not the merger itself but the logic that underpins it. In the absence of a shared national mission for higher education, mergers are now framed as solutions: a form of market rationalisation presented as vision.
The vacuum where mission should be
Since the 2012 funding reforms, higher education has been treated less as civic infrastructure and more as a competitive market. Public investment was replaced by loans. Students were told to think like investors. Degrees became receipts.
Into the gap left by an absence of national purpose rushed hyper-regulation: metrics, thresholds, and questions of fiscal viability. Within this narrowed frame, mergers appear logical. Bigger looks cheaper. Consolidation looks like progress. But without a shared mission, the deeper questions go unanswered.
The long contraction
For much of the last century, almost every town in Britain had its own art school: civic in origin, modest in scale, and rooted in place. In the 1960s there were over 150 across England. Over time, that dispersed civic network was redrawn. Some schools were absorbed into polytechnics, some federated into new structures, many disappeared.
From this history, four models emerged: the consolidated metropolitan brand, uniting multiple colleges under one identity; the regional federation spread across towns and cities; the specialist regional provider rooted in place; and the art school absorbed into a larger university. All four persist, but history shows how quickly the civic and regional variants were erased in the pursuit of scale. That remains the risk.
The limits of consolidation
Super universities are most often justified through promises of efficiency and resilience. The patterns of merger and acquisition are familiar, exercised through cuts, closures, and the stripping back of provision. Contraction is presented as progress.
And what follows: a merger into an “Ultra Super University”, a “Mega University”? The logic of consolidation always points in that direction. Fewer institutions. The illusion that size solves structural problems.
But what if the future of universities is regional, hybrid and networked? Do mergers enable this? Or do they reduce it, by erasing local presence in the pursuit of efficiency?
The risk is not only that provision shrinks, but that our regional and civic anchors are lost. A university’s resilience lies not in the absence of difference but in its presence: in the tolerance of variety, the recognition of locality, and the capacity to sustain attachment.
Federation of art schools
UCA grew from a federation of art schools, distributed rather than centralised, holding to a civic model of place. This has been hard to sustain in today’s free market. However, our University has become a place for those who find belonging in community, for outliers and outsiders at home in the intimacy of a civic setting, rather than the intensity of the metropolis. Our resilience shows how creative specialist schools can generate strength from vulnerability. Our story also foreshadows the systemic pressures now confronting universities everywhere.
The Kent–Greenwich merger now brings new possibilities for Medway, positioned between Greenwich and Kent and home to a campus of both universities. If approached with care, it could restore creative presence to a place long on the periphery.
Our civic project persists at Canterbury School of Art, Architecture and Design. Our founder, Sidney Cooper, a local painter, established Canterbury’s School of Art in 1868 as a gift to the city. It has survived every reform since. In the 1960s it moved into a modernist building, future-facing yet rooted in the Garden of England.
That identity carried it through polytechnic consolidation, university expansion, and marketisation. It remains its strength now: an art school for the city, of the city, and in the city. Creativity is lived as much as it is taught.
A human-sized proposition
For us at UCA Canterbury, the alternative is clear. Ours is a human-sized proposition: intimate, civic, distinctive. A place where students are known by name, where teaching is close, and where creativity is inseparable from civic life.
We intend that our graduates remain in creative professions for life, not because of economies of scale but because of the depth of their formation. Small institutions enable what scale cannot: intimacy, belonging, and the tolerance of difference. They cultivate attachment to place, the character of community, and the fragile conditions in which nurture and trust can grow. These are not marginal gains. They are the essence of education itself. Vulnerability, when named and advocated for, becomes strength.
This is the measure against which any super university must be judged: not whether it scales, but whether it sustains the human scale within it. The crisis in higher education is not only financial but cultural. It is about whether universities can still act as places of meaning, attachment, and public need.
Our founder, Sidney Cooper, understood in 1868 that education was not about scale but about purpose. That mission still speaks. In the shadow of consolidation and the spectre of Artificial Intelligence, what must endure is the human scale of learning and belonging.
The Trump administration, since returning to power in 2025, has escalated attacks on the foundations of democracy, the environment, world peace, human rights, and intellectual inquiry. While the administration has marketed itself as “America First,” its policies have more often meant profits for the ultra-wealthy, repression for the working majority, and escalating dangers for the planet.
Below is a running list of 100 of the most dangerous actions and policies—a record of how quickly a government can dismantle hard-won protections for people, peace, and the planet.
I. Attacks on the Environment
Withdrawing from the Paris Climate Agreement—again.
Dismantling the EPA’s authority to regulate greenhouse gases.
Opening federal lands and national parks to oil, gas, and mining leases.
Gutting protections for endangered species.
Allowing coal companies to dump mining waste in rivers and streams.
Rolling back vehicle fuel efficiency standards.
Subsidizing fossil fuel companies while defunding renewable energy programs.
Suppressing climate science at federal agencies.
Greenlighting pipelines that threaten Indigenous lands and water supplies.
Promoting offshore drilling in fragile ecosystems.
Weakening Clean Water Act enforcement.
Dismantling environmental justice programs that protect poor communities.
Politicizing NOAA and censoring weather/climate warnings.
Undermining international climate cooperation at the UN.
Allowing pesticides banned in Europe to return to U.S. farms.
II. Undermining World Peace and Global Stability
Threatening military action against Iran, Venezuela, and North Korea.
Expanding the nuclear arsenal instead of pursuing arms control.
Cutting funding for diplomacy and the State Department.
Withdrawing from the World Health Organization (WHO).
Weakening NATO alliances with inflammatory rhetoric.
Escalating drone strikes and loosening rules of engagement.
Providing cover for authoritarian leaders worldwide.
Walking away from peace negotiations in the Middle East.
Blocking humanitarian aid to Gaza, Yemen, and other war-torn areas.
Expanding weapons sales to Saudi Arabia despite human rights abuses.
Using tariffs and sanctions as blunt instruments against allies.
Politicizing intelligence briefings to justify military adventurism.
Abandoning refugee protections and asylum agreements.
Treating climate refugees as security threats.
Reducing U.S. participation in the United Nations.
III. Attacks on Human Rights and the Rule of Law
Expanding family separation policies at the border.
Targeting asylum seekers for indefinite detention.
Militarizing immigration enforcement with National Guard troops.
Attacking reproductive rights and defunding women’s health programs.
Rolling back LGBTQ+ protections in schools and workplaces.
Reinstating bans on transgender service members in the military.
Undermining voting rights through purges and voter ID laws.
Packing the courts with extremist judges hostile to civil rights.
Weaponizing the Justice Department against political opponents.
Expanding surveillance powers with little oversight.
Encouraging police crackdowns on protests.
Expanding use of federal troops in U.S. cities.
Weakening consent decrees against abusive police departments.
Refusing to investigate hate crimes tied to far-right violence.
Deporting long-term immigrants with no criminal record.
IV. Attacks on Domestic Peace and Tranquility
Encouraging militias and extremist groups with dog whistles.
Using inflammatory rhetoric that stokes racial and religious hatred.
Equating journalists with “enemies of the people.”
Cutting funds for community-based violence prevention.
Politicizing natural disaster relief.
Treating peaceful protests as national security threats.
Expanding federal use of facial recognition surveillance.
Undermining local control with federal overreach.
Stigmatizing entire religious and ethnic groups.
Promoting conspiracy theories from the presidential podium.
Encouraging violent crackdowns on labor strikes.
Undermining pandemic preparedness and response.
Allowing corporations to sidestep workplace safety rules.
Shutting down diversity and inclusion training across agencies.
Promoting vigilante violence through online platforms.
V. Attacks on Labor Rights and the Working Class
Weakening the Department of Labor’s enforcement of wage theft.
Blocking attempts to raise the federal minimum wage.
Undermining collective bargaining rights for federal workers.
Supporting right-to-work laws across states.
Allowing employers to misclassify gig workers as “independent contractors.”
Blocking new OSHA safety standards.
Expanding exemptions for overtime pay.
Weakening rules on child labor in agriculture.
Cutting unemployment benefits during economic downturns.
Favoring union-busting corporations in federal contracts.
Rolling back protections for striking workers.
Encouraging outsourcing of jobs overseas.
Weakening enforcement of anti-discrimination laws in workplaces.
Cutting funding for worker retraining programs.
Promoting unpaid internships as a “pathway” to jobs.
VI. Attacks on Intellectualism and Knowledge
Defunding the Department of Education in favor of privatization.
Attacking public universities as “woke indoctrination centers.”
Promoting for-profit colleges with predatory practices.
Restricting student loan forgiveness programs.
Undermining Title IX protections for sexual harassment.
Defunding libraries and public broadcasting.
Politicizing scientific research grants.
Firing federal scientists who contradict administration narratives.
Suppressing research on gun violence.
Censoring federal climate and environmental data.
Promoting creationism and Christian nationalism in schools.
Expanding surveillance of student activists.
Encouraging book bans in schools and libraries.
Undermining accreditation standards for higher education.
Attacking historians who challenge nationalist myths.
Cutting humanities funding in favor of military research.
Encouraging political litmus tests for professors.
Treating journalists as combatants in a “culture war.”
Promoting AI-driven “robocolleges” with no faculty oversight.
Gutting federal student aid programs.
Allowing corporate donors to dictate university policy.
Discouraging international students from studying in the U.S.
Criminalizing whistleblowers who reveal government misconduct.
Promoting conspiracy theories over peer-reviewed science.