Tag: case

  • The end of pretend – AI and the case for universities of formation

    The end of pretend – AI and the case for universities of formation

    I loved magic as a kid. Card tricks, disappearing coins, little felt rabbits in pretend top hats. “Now anyone can be a magician,” proclaimed the advert in the Argos catalogue. Ta da. Now that’s magic.

    I’d make pretend tickets, rearrange the seating in the front room, and perform shows for the family – slowly learning the dark arts of misdirection and manipulation along the way. When I performed, I generated pride.

    Over time I found that some of those skills could be used to influence people more generally – to make them feel better about themselves, to change their decisions, to trigger some kind of behaviour.

    Sometimes, I’d rationalise, as long as I was doing it for the right reasons, it was better if they didn’t know it was a trick. The end justified the means. Or did it?

    People love it when they know that magic is being performed as magic – the willing suspension of disbelief, the pleasure of being fooled by someone who’s earned the right to fool us. When they give permission to be illegitimately impressed, all is fine.

    But what they can’t stand is being lied to. We don’t like being deceived. Most political news in this country centres on who lied and about what. We’re obsessed with it.

    The cover-up is always worse than the crime, yet everyone still does it – they have to, they rationalise, to keep up, or to get permission. The gap between how things are and how we present them is the game.

    Once they’re in, it won’t matter that the sector painted an unattainable picture of student life for applicants. Once the funding is secured, universities can fess up later that it isn’t as good as government thought it would be. Once the rules are published, better to ask for forgiveness over the impact on net migration – not permission.

    I think a lot about that little magic set I got, because so much of what AI does still sits for me in that “magic trick” space.

    Ta da. Look what it can do. Generate an essay, write a play, create some code, produce an image of the Pope in a puffer jacket. But the line between magic and lies is a slippery one, because its number one use case is pretence.

    AI is used to lie – fake essays, fake expertise, fake competence. But mostly to make us look better, appear faster and seem wiser. The anxiety about being “found out” is the anxiety of the liar, not the audience at a magic show. Students worry they’ll be caught. Universities worry their degrees will be worthless.

    Everyone worries that the whole edifice of qualifications and signals and “I know something you don’t” will collapse under the weight of its own pretending. But the pretending was already there – AI just makes the tricks cheaper, and much harder to sustain.

    When I look back upon my life

    I’ve been in a particularly reflective mood recently – I turned 50 at the weekend (I can’t believe it either, it’s the moisturiser) and there’s something I can’t shake. When I look back upon my life, it’s always with a sense of shame.

    When I got accepted to the University of the West of England in the mid-nineties, grandparents on both sides were thrilled that I had “got into Bristol”. A few extra Bonusprint copies of the sunken lawn at the St Matthias campus helped.

    It hadn’t started as a deliberate lie – more a misunderstanding about where we had driven to on open days – but instead of correcting it, I doubled down.

    Nobody in my family had been to university, and I doubt they would have discerned the difference. But on some level I thought I had to prove that their financial support was for something rare. Something… special.

    Decades later I realised that the entire edifice of higher education runs on the same kind of slippage – the gap between what universities actually do, and the status they are assumed to have and confer.

    Applicants and their families celebrate “getting in” as if admission itself were the achievement. Parents frame graduation photos, the ceremony mattering more than the three years that preceded it. Employers use degree classifications as sorting mechanisms while moaning that the sort has not delivered the graduates they wanted. There’s a graduate premium. And so on.

    Those of us who write about higher education are no better. Our business model rests on “I know something you do not” – the insider knowledge, the things you haven’t noticed, the analysis you can’t get elsewhere. Scarcity of information, monetised. I’ve built a career on being the person in the room who has read the regulatory guidance.

    But now, suddenly, a machine can summarise the guidance in seconds. Not as well as I can – not yet, not always – but well enough to make me wonder what I am actually for. What value I bring. How good I am at… pretending.

    AI doesn’t create that anxiety. It exposes something that was always there – the fear that our value was never in what we knew, but in other people not knowing it. And that eventually, someone might find that out.

    It’s always with a sense of shame

    Back in 1995 my first (handwritten) university essay was about the way the internet lets you become someone you are not. Chatrooms were new and identity was suddenly fluid. You could lie about everything – your age, your appearance, your expertise – and checking was hard.

    The internet has been flooded with exaggeration ever since. Wish.com tat that looks nothing like the picture. LinkedIn profiles that bear no relationship to actual jobs. Influencers selling lives they don’t live in places that barely exist.

    But it has also liberated us. At UWE, I lived through the transition from index cards in libraries to DogPile, asking Jeeves and Google. The skill of navigating a card catalogue, of knowing which reference books to check – it felt essential, and then it was worthless. For one semester, we were told we weren’t allowed to use search engines. The faculty held on for a while, then let go.

    In my first year, I chose a module involving audio editing on reel-to-reel tape. Splicing, cutting, winding, knives. At the end of the year, I got a job helping to put the equipment in a skip. The skills I’d learned were obsolete before I graduated.

    Each time, there was a period of pretending that the old skills still mattered. Each time, the system eventually admitted they didn’t. Each time, something was revealed about what had actually been valuable all along. The card catalogue wasn’t the point – finding and evaluating information was. The handwriting wasn’t the point – thinking under pressure was. The reel-to-reel wasn’t the point – understanding how to shape a story with sound was.

    Now the sector clings on to exams, essays, and the whole apparatus of assessment that assumes that producing a thing proves you learned something. The system holds on – but for what?

    I’ve always been the one to blame

    If I rummage through the AI pitches that land at [email protected], I can see a familiar pattern.

    There are catastrophists. Students are cheating on an industrial scale. The essay is dead. Standards are collapsing and students are cognitively offloading while the great plagiarism machine works its magic.

    There are tech evangelists. Productivity gains, personalised learning, democratised access and emancipation – just so long as you don’t ask who is selling the tools, who is buying the data, or what happens to students who can’t afford the premium tier.

    Then there is the centrist-Dad middle. “It is neither all good nor all bad” – balance, nuance, thoughtful engagement, and very little about what any of this is actually for.

    The catastrophists are wrong because they assume what’s being bypassed was valuable – that the essay-writing, the exam-sitting, the problem-set-completing were the point rather than proxies for something else. If the activities can be replaced by a machine, what were they measuring?

    The evangelists are wrong because they assume more efficiency is always better – that if AI frees us from X, we’ll have more time to do Y. But they never say what Y is. Or whose time it becomes. In practice, we know – the efficiency dividend flows upward, and never shows up as an afternoon off.

    The balanced view is just as bad, because it pretends there’s no choice to be made. It lets us sound reasonable while avoiding the harder question – what is higher education for?

    At the high risk of becoming one of those bores at a conference whose “question” is a speech about that very issue, I do think there is a choice to be made. We ought at least to ask if universities exist to sort and qualify, or to form and transform. AI forces the question.

    For everything I long to do

    Let’s first admit a secret that would get me thrown out of the Magic Circle. The industrial model of education was built on scarcity, and scarcity made a certain kind of pretending possible.

    Information was scarce – held in libraries, transmitted by experts, accessible only to those who got through the door. A degree meant three years in proximity to information others could not reach.

    Attention was scarce – one lecturer, two hundred students, maybe a weekly seminar. The economics of mass higher education turned teaching into broadcast, not dialogue, but the scarcity, coupled with outcomes stats from the past, still conferred value.

    Feedback was scarce – assignments returned weeks later with a grade and a short paragraph. The delay and brevity made the judgement feel weighty, even oracular.

    In a scarcity system, hoarding makes sense. Knowledge is power precisely because others don’t have it. “I know something you do not” isn’t a bug – it’s the business model. But once something isn’t scarce any more, we have to search again for value.

    We’ve been here before. Calculators didn’t destroy maths – they revealed that arithmetic wasn’t the point. Google didn’t destroy research – it revealed that finding information wasn’t really the hard bit. Each time the anxiety was the same – students will cheat, standards will collapse, the thing we valued will be lost. Each time the pretending got harder to sustain.

    For me AI fits the pattern. Not because it knows everything – it obviously doesn’t. Its confident wrongness is one of its most dangerous features. But it makes a certain kind of information effectively free. Facts, frameworks, standard analyses are now available to anyone with an internet connection and the wit to ask.

    And it hurts to carefully build and defend systems that confer status on things humans can do – only to have something come along and relieve humans from having to do them. It causes a confrontation – with value.

    No matter when or where or who

    During the early days of Covid, I came across a Harvard Business School theory called Jobs To Be Done. People pay to get a job done, but organisations often misunderstand the real job they’re being paid to perform.

    As a kid, the Sinclair ZX Spectrum in our house was marketed as an educational tool – an invitation to become a programmer. Some did. Most, like me, worked out how to make the screen say rude words and then played games.

    Students have at least two jobs they want done. One is access to well-paid and meaningful work, made possible through obtaining a degree and supplied by academic programmes. The second is coming of age – the intoxicating combination of growing up and lifestyle. Becoming someone. Finding your people. Working out who you are when you’re not defined by your parents or your school.

    Universities have always provided both, but only dare attribute value to the first. The second is treated as incidental – “the student experience”, something that happens around the edges. But for many students, perhaps most, the second job is why they came. The qualification is the price of admission to three years of transformation.

    AI increasingly handles the first job – the information, the credentials, the sorting – more efficiently than universities ever could. If that were all universities offered, they’d already be obsolete. What AI can’t provide is the second job. It can’t help us become someone. It can’t introduce us to people who will change our lives. It can’t hold us accountable, or surprise us, or make us brave.

    During Covid, I argued that universities should cancel as much face-to-face teaching as possible – because it wasn’t working anyway – but keep campuses open. Not for teaching – for being. For studying together, bonding, bridging, belonging.

    I’ve not changed my view. AI just makes it more urgent. If the content delivery can be automated, the campus has to be for something else. That something else is formation.

    Has one thing in common, too

    A couple of years ago I came across Thomas Basbøll, resident writing consultant at Copenhagen Business School Library. He argues that when a human performs a cognitively sophisticated task – writes a compelling essay, analyses a complex case, synthesises disparate sources – we infer underlying competence. The performance becomes evidence of something deeper.

    When a machine performs the same task, we can’t make the inference. The machine has processes that produce outputs. It doesn’t “know” anything – it predicts tokens. The output might resemble what a knowledgeable human would produce, but it proves nothing about understanding.

    Education has always used performance as a proxy for competence. Higher education sets essays because it assumed that producing a good one required learning something. There was trust in the inference from output to understanding, and AI breaks it. The performance proves nothing.

    For many students, the performance was already disconnected from competence. Dave Cormier, from the University of Prince Edward Island, described the experience of essay writing in the search era as:

    “have an argument, do a search for a quote that supports that position, pop the paper into Zotero to get the citation right, pop it in the paper. No reading for context. No real idea what the paper was even about.”

    There was always pretending. AI just automated it.

    Basbøll’s question still haunts me. What is it that we want students to be able to do on their own? Not “should we allow ChatGPT” – that battle is lost. What capacities, developed through practice and evidenced in assessment, do we actually care about?

    If the answer is that appearing literate is enough, then we might as well hand the whole thing to the machines. If the answer is that we want students to actually develop capacities, then universities will need to watch students doing things – synchronous engagement, supervised practice, assessment that can’t be outsourced. A shift that feels too resource-intensive for the funding model.

    What’s missing from both options is that neither is really about learning. One is about performing competence, the other is about proving competence under surveillance, but both still treat the output as the point. The system can’t ask what students actually learned, because it was never designed to find out. It was designed to sort.

    Everything I’ve ever done

    How hard should education be? The “meritocracy of difficulty” ties academic value to how hard a course is to survive – dense content, heavy workloads, high-stakes assessment used to filter and sort rather than support students. Go too far in the other direction, and it’s a pointless prizes-for-all game in which nobody learns a thing.

    Maybe the sorting and the signalling is the problem. The degree classification system was designed for an elite era where classification signalled that the graduate was better than other people. First class – exceptional. Third – joker. The whole apparatus assumes that the point of education is to prove that your Dad’s better than my Dad. See also the TEF.

    Everyone pretends about the workload. The credit system assumes thirty-five to forty hours per week for a full-time student. Students aren’t studying for anything like that. The gap is vast, everyone knows it, and nobody says it out loud because saying it would expose the fiction.

    AI intensifies it all. If students can automate the drudgery, they will – not because they’re lazy, but because they’re rational actors in a system that rewards outputs over process. If the system says “produce this essay” and the essay can be produced in ten minutes, why would anyone spend ten hours?

    Mark Twain might have said that he would never let his schooling interfere with his education. Today’s undergraduates would more often lament that they can’t let their lectures and seminars interfere with the part-time job that pays the rent.

    Every place I’ve ever been

    There’s a YouTube video about Czech railways that’s been stuck in my head for weeks now. They built a 200 km/h line between Prague and Budweis and held celebrations – the first domestic intercity service to break the 160 km/h barrier.

    But only one train per day actually runs at that speed. It arrives late every time. Passengers spend the whole journey anxious about missing their ten-minute connection at the other end.

    The Swiss do it differently. The Gotthard Base Tunnel was built for 230 km/h. Trains run at 200. The spare capacity isn’t wasted – it’s held in reserve. If a train enters the tunnel with a five-minute delay, it accelerates and emerges with only two. The tunnel eats delays. The result is the kind of punctuality where you almost always make your connection.

    The Czech approach is speed fetishism – make the easily marketable number bigger, and assume that’s improvement. The Swiss approach is reliability – build in slack, prioritise the journey over the metric, make sure people get where they’re going.

    It sometimes feels to me like UK universities have gone the Czech route. We’re the envy of the world on throughput – faster degrees, more students, tighter timetables, twelve-week modules with no room to fall behind.

    But when anything goes wrong – and things always go wrong – students miss their connections. A bad week becomes a failed module. A failed module becomes a resit year. A mental health crisis becomes a dropout. Then we blame them for lacking resilience, as if the problem were their character rather than a system designed with no slack.

    The formation model is the Swiss model. Slow down. Build in reserves. Let students recover from setbacks. Prioritise the journey over the metric. Accept that some things cannot be rushed.

    At school they taught me how to be

    Universities tell themselves similar lies about academics. It’s been obvious for a long time that the UK can’t sustain a system where researchers are also the teachers, the pastoral supporters, the markers and the administrators.

    The all-rounder academic – brilliant at research, compelling in lectures, attentive in tutorials, wise in pastoral care, efficient at marking, engaged in knowledge exchange – was always a fantasy, tolerable only when student numbers were small enough to hide the gaps.

    Massification stretched it. Every component became more complicated, with more onerous demands, while the mental model of what good looks like didn’t change. AI breaks it.

    If students automate essay production, academics can automate feedback. We’re already seeing AI marking tools that claim to do in seconds what takes hours. If both sides are pretending – students pretending to write, academics pretending to read – what’s left?

    The answer is – only the encounter. The tutorial where someone’s question makes you think again. The supervision where a half-formed idea gets taken seriously. The seminar where genuine disagreement produces genuine movement. The moments when people are present to each other, accountable to each other, and changed by each other.

    They can’t be automated. They also can’t be scaled in the way the current model demands. You can’t have genuine encounters at a ratio of one to two hundred. Nor can you develop judgement in a twelve-week module delivered to students whose names you don’t know.

    The alternative is differentiation – people who teach, people who research, people who coach, working in teams on longer-form problems rather than alone in offices marking scripts. But that requires admitting the all-rounder was always a lie, and restructuring everything around that admission.

    So pure in thought and word and deed

    If information is now abundant and feedback can be instant and personalised, then the scarcity model is dead. Good riddance. But abundance creates its own problems.

    Without judgement, abundance is useless. Knowing that something is the case is increasingly cheap. Any idiot with ChatGPT can generate an account of the causes of the First World War or the principles of contract law. But knowing what to do about it, whether to trust it, how it connects to everything else, which bits matter and which are noise – these remain expensive, slow, human.

    Judgement is not a skill you can look up. It’s a disposition you develop through practice – through getting things wrong and understanding why, through watching people who are better at it than you, through being held accountable by others who will tell you when you’re fooling yourself. AI can give us information. It can’t give us judgement.

    Abundance makes it harder to know what we don’t know. When information was scarce, ignorance was obvious. Now, ignorance is invisible. We can generate confident-seeming text on any topic without understanding anything about it. The gap between performance and competence widens.

    UCL’s Rose Luckin calls what’s needed “meta-intelligence” – not knowing things, but knowing how we know, knowing what we don’t know, and knowing how to find out. AI makes meta-intelligence more important, not less. If we can’t evaluate what the machine is giving us, we’re not using a tool. We’re being used by one.

    That’s the equity issue that most AI boosterism ignores. If you went to a school that taught you to think, AI is a powerful amplifier. If you went to a school that taught you to comply, AI is a way of complying faster without ever developing the capacities that would let you do otherwise.

    They didn’t quite succeed

    Cultivating judgement means designing curricula around problems that don’t have predetermined answers – not case studies where students are expected to reach the “right” conclusion, but genuine dilemmas where reasonable people disagree. It means assessment that rewards the quality of reasoning, not just the correctness of conclusions – teachers who model uncertainty, who think out loud, who change their minds in public.

    Creating communities of inquiry means spaces where people think together, are accountable to each other, and learn to be wrong in public. They can’t be scaled, and can’t be automated. They require presence, continuity, and trust built over time. AI can prepare us for these spaces. It can’t be one of them.

    Last week I was playing with a custom GPT with a group of student reps. We’d loaded it with Codes of Practice and housing law guidance, and for the first time they understood their rights as tenants – not deeply, not expertly, but enough to know what questions to ask and where to push back. They’d never have encountered this stuff otherwise.

    The custom GPT wasn’t the point – the curiosity it sparked was. They left wanting to know more, not less. That’s what democratised information synthesis can do when it’s not about producing outputs faster, but about opening doors others didn’t know existed.

    Father, forgive me

    There’s always been an irony in the complaint that graduates lack “soft skills”. For decades, employers demanded production – write the report, analyse the data, build the model. Universities obliged, orienting curricula around outputs and assessing students on their capacity to produce. Now that machines produce faster and cheaper, employers discover they wanted something else all along.

    They call it “soft skills” or “emotional intelligence” or “communication”. What they mean is the capacity to be present with other humans. To listen, to learn, to adapt – to work with people who are different from you, and to contribute to collective endeavours rather than produce outputs in isolation.

    It’s always irked me that they’re described as soft. They are the hardest skills to develop and the hardest to fake. They are also exactly what universities could have been cultivating all along – if anyone had been willing to name them and pay for them.

    Universities that grasp this can offer students, employers and society something they genuinely need – people who can think, who can learn, who can work with others, who can handle complexity and uncertainty. Employers will need to train them in their specific context, but they’ll be worth training. That’s a different value proposition than “job-ready graduates” – and a more honest one.

    I remember visiting the Saltire Centre at Glasgow Caledonian and being amazed that a university was brave enough to notice that students like studying together. Not just being taught together – studying together. The spaces that fill up fastest are the ones where people can work alongside others, help each other, and belong to something.

    It’s not a distraction from learning. It is learning. The same is true of SUs, societies, volunteering, representation – the “extracurricular” activities that universities tolerate but rarely celebrate. These are where students practise collective action, navigate difference, take responsibility for something beyond themselves. Formation happens in community, not just in classrooms.

    I tried not to do it

    Being brave enough to confront all this will be hard. The funding model rewards efficiency, the regulatory model rewards measurability, and the labour market wants qualifications. The incentive is to produce – people who can perform, not people who have developed.

    Students – many, not all – have internalised this logic. They want the degree, the credential, the signal. They are strategic, instrumental, and focused on outcomes. It’s not a character flaw – it’s a rational response to the system they’re in. If the degree is the point, then anything that gets you the degree efficiently is sensible. AI is just the latest efficiency tool.

    But while shame is a powerful disincentive to fessing up, the thing about pretending is that it’s exhausting. And it’s lonely.

    For years at Christmas, I pretended UWE was Bristol because I was ashamed – ashamed of wanting to study the media, ashamed of coming from a family where going to any university was exceptional, ashamed of the gap between where I was and where people felt I should be. The pretending was a way of managing the shame.

    I suspect a lot of students feel something similar. The performance of knowledge, the strategic deployment of qualifications, the constant positioning and comparison – these are ways of managing the fear that you’re not good enough, that you’ll be found out, that the gap between who you are and who you’re supposed to be is too wide to bridge.

    AI intensifies the fear for some – the terror that they’ll be caught, that the machine will be detected, that the pretending will be exposed. But it might offer a different possibility. If the pretending no longer works – if the performance can be automated and therefore has no value – then maybe the only thing left is to become someone who doesn’t need to pretend.

    And I still don’t understand

    That is the democratic promise of abundant information. Not that everyone will know everything – that’s neither possible nor desirable. But that knowledge can stop being a marker of status, a way of putting others down, or a resource to be hoarded. “I know something you don’t” can give way to “we can figure this out together.”

    The shift from knowledge as possession to knowledge as practice is a shift from “I have information you lack” to “I can work with you on problems that matter.” From education as credentialing to education as formation. From “I’m better than you” to “I can contribute.” From pretending to becoming.

    We’d need assessment that rewards contribution over reproduction. If the essay can be generated by AI, then the essay is testing the wrong thing. Assessment that requires students to think in real time, in dialogue, in response to genuine challenge – this is harder to automate and more valuable to develop. The individual student writing the individual essay marked by the individual academic is game over if AI can play both roles.

    We’ll need pedagogy that prioritises encounter over transmission. Small group teaching. Sustained relationships between students and teachers. Curricula designed around problems rather than content coverage. Something between a module and a course, run by teams, with long-form purpose over a year rather than twelve-week fragments. Time and space for the slow work of formation.

    We’ll need recognition that learning is social. Common spaces where students can study together. Student organisations supported rather than tolerated. Credit for service learning, for contribution to community, for the “extracurricular” activities where formation actually happens.

    We’ll need slack in the system. The Swiss model, not the Czech one. Space to fall behind and catch up. Multiple attempts at assessment. Pass/fail options that encourage risk-taking. Time built in for things to go wrong, because things always go wrong. A system that absorbs delays rather than compounding them.

    None of this will happen quickly. The funding model, the regulatory model, the labour market, the expectations students bring with them – they are not going to transform overnight. We’ll all have to play along for a while yet, doing the best we can within systems that reward the wrong things.

    But playing along is not the same as believing. And knowing what we’re playing along with – knowing what we’re compromising and why – is the beginning of something different.

    The end of pretending

    The reason I came to work here at Wonkhe – and the whole point of my work with students’ unions over the years – has been about giving power away. Not hoarding insight, but spreading it. Not being the person who knows things – but helping other people act on what they now know.

    The best email I got last week wasn’t someone telling me that I was impressive, or clever. I’ve learned how to get those emails. It was someone saying “really great notes and really great meeting – has got our brains whirring a lot.” Using what I offered to do something I couldn’t have done myself.

    Maybe I’ve become one of those insufferable men who grab the mic to assert that what education is for is what it did for them. But the purpose of teaching is surely rousing curiosity and creating the conditions for people to become.

    When I look back at the version of myself who told his family he was going to Bristol, I feel compassion more than embarrassment. He was doing the best he could in a system that made pretending rational.

    Thirty years on, I’ve watched skills become obsolete, formats get put in the skip, pretences exposed. Each time we held on for a while. Each time we eventually let go. Each time something was revealed about what had actually mattered all along.

    AI doesn’t end the system of pretending. But it does expose its contradictions in ways that might, eventually, make something better possible. If the performance of knowledge becomes worthless, then maybe actual formation – and the human encounters that produce it – can finally be valued.

    The hopeful answer is that universities can be places where people become more fully human. Not because they acquire more information, or even because they become subject specialists – though many will – but because they develop the capacities for thought, action, connection and care that make a human life worth living.

    They are capacities that can’t be downloaded, nor automated, nor faked. They can be developed only slowly, in relationship, through practice, with friction.

    You came to university for skills and they turned out to be useless? That’s a trick. You came for skills and left ready to change the world? Now that’s magic.

    Continue the conversation at The Secret Life of Students: Learning to be human in the age of AI – 17 March, London. Find out more and book.

    Source link

  • The case for treating adults as adults when it comes to AI chatbots

    The case for treating adults as adults when it comes to AI chatbots

    For many people, artificial intelligence chatbots make daily life more efficient. AI can manage calendars, compose messages, and provide quick answers to all kinds of questions. People interact with AI chatbots to share thoughts, test ideas, and explore language. This technology, in various ways, is playing a larger and larger role in how we think, work, and express ourselves. 

    But not all the news is good, and some people want to use the law to crack down on AI.

    Recent news reports describe a wave of lawsuits alleging that OpenAI’s generative AI chatbot, ChatGPT, caused adult users psychological distress. The filings reportedly seek monetary damages for people who conversed at length with a chatbot’s simulated persona and reported experiencing delusions and emotional trauma. In one reported case, a man became convinced that ChatGPT was sentient and later took his own life. 

    These situations are tragic and call for genuine compassion. Unfortunately, if these lawsuits succeed, they’ll impose an unworkable expectation on anyone creating a chatbot: scrub anything that could trigger its most vulnerable users. Everyone, even fully capable adults, would effectively be treated as if they were on suicide watch. That’s a standard that would chill open discourse.

    Adults are neither impervious to nor helpless before AI’s influence on their lives and minds, but treating them like minors is not the solution.

    Like the printing press, the telegraph, and the internet before it, artificial intelligence is an expressive tool. A prompt, an instruction, or even a casual question reflects a user’s intent and expressive choice. A constant across its many uses is human agency — because it is ultimately a person that ends up deciding what to ask, what responses to keep, what results to share, and how to use the material it develops. Just like the communicative technologies of the past, AI has the potential to amplify human speech rather than replace it, bringing more storytellers, perspectives, and critiques with it. 

    Every new expressive medium in its time has faced public scrutiny and renewed calls for government intervention. After Orson Welles’s famous 1938 “War of the Worlds” radio broadcast about a fictional alien invasion, for example, the Federal Communications Commission received hundreds of complaints urging the government to step in. Many letters expressed fear that the technology could deceive and destabilize people. Despite the panic, neither the broadcaster nor Welles, who went on to cinematic fame, faced any formal consequences. As time went on, the dire predictions never materialized.

    Early panic rarely aligns with long-term reality. Much of what once seemed threatening eventually found its place in civic life, revolutionizing our ability to communicate and connect. This includes radio dramas, comic books, TV, and the early web. 

    The attorneys filing lawsuits against these AI companies argue that AI is a product, and if a product predictably causes harm, safeguards are expected, even for adults. But when the “product” is speech, that expectation meets real constitutional limits. Even when harm seemed foreseeable, courts have long refused to hold speakers liable for the psychological effects of their speech on people who choose to engage with it. Composing rap lyrics or televising reports of violence, for example, can’t get you sued for the effects on those who listen or watch, even if someone is triggered to act out.

    This principle is necessary to protect free expression. Penalizing people for the emotional or psychological impact of their speech invites the government to police the ideas, too. Recent developments in the UK show how this can play out. Under laws that criminalize speech causing “alarm or distress,” people in England and Wales can be fined, aggressively prosecuted, or both, based entirely on the state’s claimed authority to measure the emotional “impact” of what was said. That’s not a model we should import.

    A legal framework worthy of a free society should reflect confidence in adults’ ability to pursue knowledge without government intrusion, and this includes the use of AI tools. Extending child-safety laws or similar liability standards to adult conversations with AI would erode that freedom.

    The same constitutional protections apply when adults interact with speech, even speech generated by AI. That’s because the First Amendment ensures that we meet challenging, misleading, or even false ideas with more speech rather than censorship. More education and debate are the best means to preserve adults’ ability to judge ideas for themselves. It also prevents the state from deciding which messages are too dangerous for people to hear — a power that, if granted, can and will almost certainly be abused and misused. This is the same principle that secures Americans’ right to read subversive books, hear controversial figures speak, and engage with ideas that offend others.

    Regulating adult conversations with AI blurs the line between a government that serves its citizens and one that supervises them. Adulthood presumes the capacity for judgment, including the freedom to err. Being mistaken or misguided is all part of what it means to think and speak for oneself.

    At FIRE, we see this dynamic play out daily on college campuses. These institutions of higher education are meant to prepare young adults for citizenship and self-governance, but instead they often treat students as if discomfort and disagreement are radioactive. Speech codes and restrictions on protests, justified as shields against harm, teach dependence on authority and distrust of one’s own resilience. That same impulse is now being echoed in calls for AI chatbot regulation.

    Yes, words can do harm, even in adulthood. Still, not every harm can be addressed in court or by lawmakers, especially not if it means restricting free expression. Adults are neither impervious to nor helpless before AI’s influence on their lives and minds, but treating them like minors is not the solution.

    Source link

  • The Case Against AI Disclosure Statements (opinion)

    The Case Against AI Disclosure Statements (opinion)

    I used to require my students to submit AI disclosure statements any time they used generative AI on an assignment. I won’t be doing that anymore.

    From the beginning of our current AI-saturated moment, I leaned into ChatGPT, not away, and was an early adopter of AI in my college composition classes. My early adoption of AI hinged on the need for transparency and openness. Students had to disclose to me when and how they were using AI. I still fervently believe in those values, but I no longer believe that required disclosure statements help us achieve them.

    Look. I get it. Moving away from AI disclosure statements is antithetical to many of higher ed’s current best practices for responsible AI usage. But I started questioning the wisdom of the disclosure statement in spring 2024, when I noticed a problem. Students in my composition courses were turning in work that was obviously created with the assistance of AI, but they failed to proffer the required disclosure statements. I was puzzled and frustrated. I thought to myself, “I allow them to use AI; I encourage them to experiment with it; all I ask is that they tell me they’re using AI. So, why the silence?” Chatting with colleagues in my department who have similar AI-permissive attitudes and disclosure requirements, I found they were experiencing similar problems. Even when we were telling our students that AI usage was OK, students still didn’t want to fess up.

    Fess up. Confess. That’s the problem.

    Mandatory disclosure statements feel an awful lot like a confession or admission of guilt right now. And given the culture of suspicion and shame that dominates so much of the AI discourse in higher ed at the moment, I can’t blame students for being reluctant to disclose their usage. Even in a class with a professor who allows and encourages AI use, students can’t escape the broader messaging that AI use should be illicit and clandestine.

    AI disclosure statements have become a weird kind of performative confession: an apology performed for the professor, marking the honest students with a “scarlet AI,” while the less scrupulous students escape undetected (or maybe suspected, but not found guilty).

    As well intentioned as mandatory AI disclosure statements are, they have backfired on us. Instead of promoting transparency and honesty, they further stigmatize the exploration of ethical, responsible and creative AI usage and shift our pedagogy toward more surveillance and suspicion. I suggest that it is more productive to assume some level of AI usage as a matter of course, and, in response, adjust our methods of assessment and evaluation while simultaneously working toward normalizing the usage of AI tools in our own work.

    Studies show that AI disclosure carries risks both in and out of the classroom. One study published in May reports that any kind of disclosure, voluntary or mandatory, in a wide variety of contexts resulted in decreased trust in the person using AI. This remained true even when study participants had prior knowledge of an individual’s AI usage, meaning, the authors write, “The observed effect can be attributed primarily to the act of disclosure rather than to the mere fact of AI usage.”

    Another recent article points to the gap present between the values of honesty and equity when it comes to mandatory AI disclosure: People won’t feel safe to disclose AI usage if there’s an underlying or perceived lack of trust and respect.

    Some who hold unfavorable attitudes toward AI will point to these findings as proof that students should just avoid AI usage altogether. But that doesn’t strike me as realistic. Anti-AI bias will only drive student AI usage further underground and lead to fewer opportunities for honest dialogue. It also discourages the kind of AI literacy employers are starting to expect and require.

    Mandatory AI disclosure for students isn’t conducive to authentic reflection but is instead a kind of virtue signaling that chills the honest conversation we should want to have with our students. Coercion only breeds silence and secrecy.

    Mandatory AI disclosure also does nothing to curb or reduce the worst features of badly written AI papers, including the vague, robotic tone; the excess of filler language; and, their most egregious hallmark, the fabricated sources and quotes.

    Rather than demanding students confess their AI crimes to us through mandatory disclosure statements, I advocate both a shift in perspective and a shift of assignments. We need to move from viewing students’ AI assistance as a special exception warranting reactionary surveillance to accepting and normalizing AI usage as a now commonplace feature of our students’ education.

    That shift does not mean we should allow and accept any and all student AI usage. We shouldn’t resign ourselves to reading AI slop that a student generates in an attempt to avoid learning. When confronted with a badly written AI paper that sounds nothing like the student who submitted it, the focus shouldn’t be on whether the student used AI but on why it’s not good writing and why it fails to satisfy the assignment requirements. It should also go without saying that fake sources and quotes, regardless of whether they are of human or AI origin, should be called out as fabrications that won’t be tolerated.

    We have to build assignments and evaluation criteria that disincentivize the kinds of unskilled AI usage that circumvent learning. We have to teach students basic AI literacy and ethics. We have to build and foster learning environments that value transparency and honesty. But real transparency and honesty require safety and trust before they can flourish.

    We can start to build such a learning environment by working to normalize AI usage with our students. Some ideas that spring to mind include:

    • Telling students when and how you use AI in your own work, including both successes and failures in AI usage.
    • Offering clear explanations to students about how they could use AI productively at different points in your class and why they might not want to use AI at other points. (Danny Liu’s Menus model is an excellent example of this strategy.)
    • Adding an assignment such as an AI usage and reflection journal, which offers students a low-stakes opportunity to experiment with AI and reflect upon the experience.
    • Adding an opportunity for students to present to the class on at least one cool, weird or useful thing that they did with AI (maybe even encouraging them to share their AI failures, as well).

    The point with these examples is that we are inviting students into the messy, exciting and scary moment we all find ourselves in. They shift the focus away from coerced confessions toward a welcoming invitation to join in and share the wisdom, experience and expertise students accumulate as we all adjust to the age of AI.

    Julie McCown is an associate professor of English at Southern Utah University. She is working on a book about how embracing AI disruption leads to more engaging and meaningful learning for students and faculty.

    Source link

  • From Task Completion to Cognitive Engagement: Making the Case for the Hourglass Paradigm of Learning – Faculty Focus

    From Task Completion to Cognitive Engagement: Making the Case for the Hourglass Paradigm of Learning – Faculty Focus

    Source link

  • Jury Awards $6M in CSU Harassment Case

    Jury Awards $6M in CSU Harassment Case

    The California State University system must pay $6 million to a former official at Cal State San Bernardino who accused administrators of harassment, The San Bernardino Sun reported.

    Anissa Rogers, a former associate dean at CSUSB’s Palm Desert campus from 2019 through 2022, alleged that she and other female employees were subjected to “severe or pervasive” gender-based harassment by system officials. Rogers also alleged that she observed unequal treatment of female employees by university administrators, and that her concerns were never investigated when she raised them. Instead, Rogers said, she was forced to resign after expressing those concerns.

    Rogers and Clare Weber, the former vice provost of the Palm Desert campus, sued the system and two San Bernardino officials in 2023. Weber alleged in the lawsuit that she was fired after expressing concerns about her low pay compared to male counterparts with similar duties.

    That lawsuit was later split, and Weber’s case is reportedly expected to go to trial next year.

    “Dr. Rogers stood up not only for herself, but also the other women who have been subjected to gender-based double standards within the Cal State system,” Courtney Abrams, the plaintiff’s attorney, told The San Bernardino Sun following a three-week trial in Los Angeles Superior Court.

    A Cal State San Bernardino spokesperson told the newspaper that CSUSB was “disappointed by the verdict reached by the jury” and “we will be reviewing our options to assess next steps.”

    Source link

  • Protecting Every Marketing Dollar: How Collegis Helped Block $2.2M in Ad Waste with CHEQ [CASE STUDY]

    Protecting Every Marketing Dollar: How Collegis Helped Block $2.2M in Ad Waste with CHEQ [CASE STUDY]

    CHEQ is trusted by more than 15,000 companies — from the Fortune 50 to emerging disruptors — to enable and protect each critical touchpoint in the evolving, human-AI customer journey. Powered by the only integrated Traffic, Threat, and Identity Intelligence Engine, CHEQ distinguishes legitimate users from bad actors — human, AI agent, or bot — and, in real-time, delivers granular, context-specific insights to marketing, commerce, and security platforms. With a best-in-class

    Source link

  • The case for collaborative purchasing of digital assessment technology

    The case for collaborative purchasing of digital assessment technology

    Higher education in the UK has a solid background in leveraging scale in purchasing digital content and licences through Jisc. But when it comes to purchasing specific technology platforms, higher education institutions have tended to go their own way, using distinct specifications tailored to their particular needs.

    There are some benefits to this individualistic approach – otherwise it would not have become the status quo. But as the Universities UK taskforce on transformation and efficiency proclaims a “new era of collaboration”, some of the long-standing assumptions about what can work in a sharing economy are being dusted off and held up to the light to see if they still hold. Efficiency – finding ways to realise new forms of value with less overall resource input – is no longer a nice-to-have; it’s essential for the sector to remain sustainable.

    At Jisc, licensing manager Hannah Lawrence is thinking about the ways that the sector’s digital services agency can build on existing approaches to collective procurement towards a more systematic collaboration, specifically, in her case, exploring ideas around a collaborative route to procurement for technology that supports assessment and feedback. Digital assessment is a compelling area for possible collaboration, partly because the operational challenges are fairly consistent between institutions – such as exam security, scalability, and accessibility – but also because of the shared pedagogical challenge of designing robust assessments that take account of the opportunities and risks of generative AI technology.

    The potential value in collaboration isn’t just in cost savings – it’s also about working together to test and pilot approaches, and to share insight and good practice. “Collaboration works best when it’s built on trust, not just transaction,” says Hannah. “We’re aiming to be transparent and open, respecting the diversity of the sector, and making collaboration sustainable by demonstrating real outcomes and upholding data handling standards and ethics.” Hannah predicts that it may take several years to develop an initial iteration of a joint procurement mechanism, in collaboration with a selection of vendors, recognising that the approach could evolve over time to offer “best in class” products at a competitive price to institutions that participate in collective procurement.

    Reviewing the SIKTuation

    One way of learning how to build this new collaborative approach is to look to international examples. In Norway, SIKT is the higher education sector’s shared services agency. SIKT started with developing a national student information system, and has subsequently rolled out, among other initiatives, national scientific and diploma archives, and a national higher education application system – and a national tender for digital assessment.

    In its first iteration, when the technology for digital assessment was still evolving, three different vendors were appointed, but in the most recent version, SIKT appointed one single vendor – UNIwise – as the preferred supplier for digital assessment for all of Norwegian higher education. Universities in Norway are not required to follow the SIKT framework, of course, but there are significant advantages to doing so.

    “Through collaboration we create a powerful lobby,” says Christian Moen Fjære, service manager at SIKT. “By procuring for 30,000 staff and 300,000 students we can have a stronger voice and influence with vendors on the product development roadmap – much more so than any individual university. We can also be collectively more effective in sharing insight across the network, like sample exam questions, for example.” SIKT does not hold views about how students should be taught, but as pedagogy and technology become increasingly intertwined, SIKT’s discussions with vendors are typically informed by pedagogical developments. Christian explains, “You need to know what you want pedagogically to create the specification for the technical solution – you need to think what is best for teaching and assessment and then we can think how to change software to reflect that.”

    For vendors, it’s obviously great to be able to sell your product at scale in this way but there’s more to it than that – serving a critical mass of buyers gives vendors the confidence to invest in developing their product, knowing it will meet the needs of their customers. Products evolve in response to long-term sector need, rather than short-term sales goals.

    SIKT can also flex its muscles in negotiating favourable terms with vendors, and use its expertise and experience to avoid pitfalls in negotiating contracts. A particularly pertinent example is on data sharing, both securing assurances of ethical and anonymous sharing of assessment data, and clarity about ultimate ownership of the data. Participants in the network can benefit from a shared data pool, but all need to be confident both that the data will be handled appropriately and that ultimately it belongs to them, not the vendor. “We have baked into the latest requirements the ability to claw back data – we didn’t have this before, stupid, right?” says Christian. “But you learn as the needs arise.”

    Difference and competition

    In the UK context, the sector needs reassurance that diversity will be accommodated – there’s a wariness of anything that looks like it might be a one-size-fits-all model. While the political culture in Norway is undoubtedly more collectivist than in the UK, Norwegian higher education institutions have distinct missions, and they still compete for prestige and to recruit the best students and staff.

    SIKT acknowledges these differences through a detailed consultation process in the creation of national tenders – a “pre-project” on the list of requirements for any technology platform, followed by formal consultation on the final list, overseen by a steering group with diverse sector representation. But at the end of the day, to realise the value of joining up, there does need to be some preparedness to compromise – or, to put it another way, to find and build on areas of similarity rather than over-refining what are often minor differences. Having a coordinating body like SIKT convene the project helps to navigate these issues. And, of course, some institutions simply decide to go another way, and pay more for a more tailored product. There is nothing stopping them from doing so.

    As far as SIKT is concerned, competition between institutions is best considered in the academic realm, in subjects and provision, as that is what benefits the student. For operations, collaboration is more likely to deliver the best results for both institutions and students. But SIKT remains agnostic about whether specific institutions have a different view. “We don’t at SIKT decide what counts as competitive or not,” says Christian. “Universities will decide for themselves whether they want to get involved in particular frameworks based on whether they see a competitive advantage or some other advantage from doing so.”

    The medium term horizon for the UK sector, based on current discussions, is a much more networked approach to the purchase and utilisation of technology to support learning and teaching – though it’s worth noting that there is nothing stopping consortia of institutions getting together to negotiate a shared set of requirements with a particular vendor pending the development of national frameworks. There’s no reason to think the learning curve even needs to be especially steep – while some of the technical elements could require a bit of thinking through, the sector has a longstanding commitment to sharing and collaboration on high quality teaching and learning, and to some extent what’s being talked about right now is mostly about joining the dots between one domain and another.

    This article is published in association with UNIwise. For further information about UNIwise and the opportunity to collaborate contact Tim Peers, Head of Partnerships.

    Source link

  • WEEKEND READING: University Collaboration – the case for admissions and professional registration  

    WEEKEND READING: University Collaboration – the case for admissions and professional registration  

    This HEPI guest blog was kindly written by James Seymour, who runs an education consultancy focusing on marketing, student recruitment, admissions and reputation, and Julie Kelly, who runs a higher education consultancy specialising in registry and governance challenges. Julie and James have worked for a range of universities at Director level in recent years.

    The Challenge  

    All through August and September, many admissions and faculty/course teams have been working hard to get thousands of new students over the line and onto the next stage of their lives. It is about more than just the UCAS application, interview, selection and firm acceptance, or the journey through Clearing – students have to actually enrol and succeed too.

    Many of these students are training to be nurses, teachers, paramedics, social workers and doctors, amongst many other allied health professional and education courses. They all need to go through essential Professional, Statutory and Regulatory Body (PSRB) requirements and additional compliance checks – from passports to Disclosure and Barring Service questionnaires to health questionnaires and more. Many are mature students who must demonstrate GCSE or equivalent competency at Grade C/4 or above, and they are less likely to have support navigating this process because they are less likely to be in full-time education.

    Most of these applicants have already been interviewed, attended selection days or Multiple Mini Interviews – MMIs (like selection speed dating) involving lots of competency stations.  

    These health students must also apply for their Student Finance loans in good time to trigger the all-important £5K+ NHS Learning Support Fund – essential to enable them to succeed and even to get to their clinical placements by bus, train or car.

    It’s a very onerous process for applicants, their supporters, and the academic, admissions, and compliance teams, who must arrange and record all of this.  

    Clearly, getting all this information recorded and verified is important, but does it have to be so admin-heavy and time-consuming? Are we putting up barriers and disincentives deterring students from starting their studies?  

    At present, we have an inconsistent mess, often involving email and incessant chasing.  

    There has to be a better way  

    Over the last 10 years we have been involved in a number of process improvement and student journey projects at UK universities. In our experience it takes at least five times longer to admit a Nurse compared to a Business, Law or English student, and at least twice as long compared to a creative arts student who submits a portfolio for interview and review. Data from the Student Loans Company indicates that at least 25% of all new students only apply for their loans on or after results day in August – presenting a real risk of delays in getting their money in time for enrolment.

    Typically, only 85-90% of Nurses and other key NHS-backed students who have a confirmed UCAS place in August actually enrol in September. Another 3-5% have left before January.  

    This is not all about motivation or resilience – part of the issue is linked to getting these students over the line with all the additional hoops they have to jump through.  

    Another issue is wasted resource across the sector and a poor student experience. A student typically applies to their five UCAS choices, and many universities undertake the additional PSRB checks during the admission process. Applicants therefore have to supply the same information to multiple institutions, each of which must then process it for students who may never actually enrol. Surely it is better for students to supply this information once, during the initial application stage?

    Postgraduate teacher trainees, including PGCE and Teach First students, have to navigate a gov.uk application process (rather than UCAS) which feels like completing a tax return – a daunting and clunky first step towards one of the most important careers any of us will ever do. They also only get three choices, for courses that start in early September – only 2-3 weeks after many final year degree results are confirmed – putting undue pressure on students, schools and institutions alike.

    It’s clear that a more collaborative approach across UK HE and professional training would be a real win for efficiency, eventual enrolment and reducing stress for all. The same issues apply to onboarding, applications and selection for degree and higher apprenticeships.

    The NHS workforce plan signals a clear need to train more Nurses and other key NHS staff, and we know that teacher recruitment targets have been missed again this year.

    Solutions and Future Projects 

    In the context of collaboration between universities, the NHS, UKVI, UCAS and the DfE, we propose some key ways to improve the process and increase the pipeline of future health and education professionals.

    1. Create a safe, secure one-stop shop for PSRB checks, uploads and compliance, so that students supply their information once and it can be shared with all their university choices and options. A number of Ed Tech companies, as well as UCAS, already provide portals for applicants, and the gov.uk system is improving each year.
    2. As well as the process, revisit the timeline for applications and compliance for NHS and other PSRB courses – if this is all checked and ready by April–May, and directly linked to Student Finance applications and/or NHS bursary support, far more students would be able to enrol, train and be ready to learn. This would require proper process mapping and joined-up thinking across different government departments, UCAS and universities themselves.
    3. The HE sector and the NHS should collectively review the factors, groups and critical incidents affecting non-enrolment and first year drop-out – nationally and across all PSRB courses – and work at pace to ‘fix the leaks’ accordingly. At present these data sets are not shared or acted upon across the UK, only within individual universities and trusts, and occasionally at conferences and sector meetings.
    4. UCAS and exam boards need to urgently bring forward automatic sharing of GCSE results via the ABL system, so that universities and applicants can be assured of level 2 qualifications.
    5. Look at alternatives to the ‘doom loop’ of GCSE Maths and English retakes as an essential requirement for entry to NHS and other professional courses. Alternative qualifications such as Functional Skills already exist and need to be amplified, so that more students are able to get over the line and start training.
    6. Universities should work together, not against each other. Each university or training provider spends many tens of thousands of pounds each year on recruitment campaigns. For Nursing degrees alone, we estimate this to be at least £1M per year; pooling just 10% of this figure to fund a consistent brand and overarching campaign would widen the pool of applicants rather than pit universities against each other.
    7. Review the application process for Postgraduate Teacher Training – consider whether it should be given back to UCAS or another tech platform to improve visibility, choice, the applicant journey and eventual enrolment figures. Clearly only three choices is not enough, with some providers being more efficient than others in responding to applicants and dealing with application volumes; the resulting bottlenecks undermine applicant confidence in the system. The early September start date for PG teaching courses also needs a review. Apart from the application time pressure, these students are also starting before the campus (and school?) is truly ready for the start of term. Why not start with the rest of their peers at the end of September, and also introduce a January start point as an option?
    8. Make funding more consistent and long term – at present universities are only paid to train students based on each year’s first year intake, leading to short-term decisions, volatility and competition. The LLE due in 2027 is unlikely to lead to flexibility in PSRB course transfer. Giving universities and health trusts a 3-4 year funding model would iron out that volatility, encourage new entrants and provide certainty to invest in facilities, staff and support to train those students.

    Conclusion and next steps  

    As the HE sector looks back on admission and enrolment for the 2025/26 academic year and prepares for 2026/27 entry, we feel that something must change to enhance the admission process for PSRB courses, all of which are critical to the future of the UK.

    The practical steps and ideas included in this article are all deliverable, but they need joined-up thinking across different parts of the process. We propose establishing a working group or task force to deliver quick wins and develop a roadmap for longer-term solutions.

    Source link

  • Amy Wax’s Case Against Penn Dismissed

    Amy Wax’s Case Against Penn Dismissed

    A Pennsylvania district judge on Thursday dismissed a lawsuit against the University of Pennsylvania filed by Amy Wax, a tenured law professor who was suspended on half pay for the 2025–26 academic year as part of her punishment for years of flagrantly racist, sexist, xenophobic and homophobic remarks.

    In the suit filed in January, Wax claimed that the university discriminated against her by punishing her—a white Jewish woman—for speech about Black students but not punishing pro-Palestinian faculty members for speech that allegedly endorsed violence against Jews.

    “As much as Wax would like otherwise, this case is not a First Amendment case. It is a discrimination case brought under federal antidiscrimination laws,” senior U.S. district judge Timothy Savage wrote in a 16-page opinion. “We conclude Wax has failed to allege facts that show that her race was a factor in the disciplinary process and there is no cause of action under federal anti-discrimination statutes based on the content of her speech.”

    Savage also rejected Wax’s argument that the court should view “her comments disparaging Black students as a statement on behalf of a protected class.”

    “Nothing in the disciplinary process or her comments leads to the conclusion that she was penalized for associating with a protected class. Her comments were not advocacy for protected classes,” he wrote. “They were negative and directed at protected classes. Criticizing minorities does not equate to advocacy for them or for white people. Her claim that criticism of minorities was a form of advocating for them is implausible.”

    Wax was sanctioned in September 2024 after a years-long disciplinary battle over a laundry list of offensive statements she made during her tenure at the law school, including that “gay couples are not fit to raise children,” “Mexican men are more likely to assault women” and that it is “rational to be afraid of Black men in elevators.” Wax has worked at the law school since 2001.

    In addition to a one-year suspension on half pay, the school eliminated her summer pay in perpetuity, publicly reprimanded her and took away her named chair. In 2018, she was removed from teaching required courses after commenting on the “academic performance and grade distributions of the Black students in her required first-year courses,” according to former dean of the law school Theodore W. Ruger.

    Source link