Category: big tech

  • The Poisoning of the American Mind

    For more than a decade, Americans have been told that polarization, mistrust, and civic fragmentation are organic byproducts of cultural change. But the scale, speed, and persistence of the damage suggest something more deliberate: a sustained poisoning of the American mind—one that exploits structural weaknesses in education, media, technology, and governance.

    This poisoning is not the work of a single actor. It is the cumulative result of foreign influence campaigns, profit-driven global technology platforms, and domestic institutions that have failed to defend democratic literacy. Higher education, once imagined as a firewall against mass manipulation, has proven porous, compromised, and in many cases complicit.

    Foreign Influence as Cognitive Warfare

    Chinese and Russian influence operations differ in style but converge in purpose: weakening American social cohesion, degrading trust in institutions, and normalizing cynicism.

    Russian efforts have focused on chaos. Through state-linked troll farms, bot networks, and disinformation pipelines, Russian actors have amplified racial grievances, cultural resentments, and political extremism on all sides. The objective has not been persuasion so much as exhaustion—flooding the information environment until truth becomes indistinguishable from propaganda and democratic participation feels futile.

    Chinese influence efforts, by contrast, have emphasized discipline and control. Through economic leverage, academic partnerships, Confucius Institutes, and pressure campaigns targeting universities and publishers, the Chinese Communist Party has sought to shape what can be discussed, researched, or criticized. While less visibly inflammatory than Russian disinformation, these efforts quietly narrow the boundaries of acceptable discourse—especially within elite institutions that prize funding and global prestige.

    Both strategies treat cognition itself as a battlefield. The target is not simply voters, but students, scholars, journalists, and future professionals—anyone involved in shaping narratives or knowledge.

    The Role of Global Tech Elites

    Foreign influence campaigns would be far less effective without the infrastructure built and defended by global technology elites.

    Social media platforms were designed to monetize attention, not to preserve truth. Algorithms reward outrage, tribalism, and repetition. Misinformation is not an accidental byproduct of these systems; it is a predictable outcome of engagement-driven design.
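
    To make that design logic concrete, here is a deliberately minimal sketch, in Python, of an engagement-optimised feed ranker. The weights and fields are invented for illustration and represent no platform's actual system; the point is what the objective function omits.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        shares: int           # virality, often driven by outrage
        comments: int         # heated threads generate long comment chains
        dwell_seconds: float  # time spent actually reading

    def engagement_score(post: Post) -> float:
        # Hypothetical weights. Note what is absent: no term for accuracy,
        # sourcing, or civic value -- only attention captured.
        return 3.0 * post.shares + 2.0 * post.comments + 0.1 * post.dwell_seconds

    def rank_feed(posts: list[Post]) -> list[Post]:
        # The feed surfaces whatever captured the most engagement, so
        # inflammatory content outranks careful reporting by design.
        return sorted(posts, key=engagement_score, reverse=True)

    feed = rank_feed([
        Post("Measured policy analysis", shares=12, comments=8, dwell_seconds=95.0),
        Post("Outrage bait", shares=480, comments=310, dwell_seconds=20.0),
    ])
    print(feed[0].text)  # "Outrage bait" ranks first despite far less reading time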

    What is often overlooked is how insulated tech leadership has become from the social consequences of its products. Executives who speak fluently about “free expression” and “innovation” operate within gated communities, private schools, and curated information environments. The cognitive pollution affecting the public rarely touches them directly.

    At the same time, these platforms have shown inconsistent willingness to confront state-sponsored manipulation. Decisions about content moderation, data access, and platform governance are routinely shaped by geopolitical calculations and market access—particularly when China is involved. The result is a global information ecosystem optimized for profit, vulnerable to manipulation, and hostile to slow, evidence-based thinking.

    Higher Education’s Failure of Defense

    Universities were supposed to be inoculation centers against mass manipulation. Instead, they have become transmission vectors.

    Decades of underfunding public higher education, adjunctification of faculty labor, and administrative bloat have weakened academic independence. Meanwhile, elite institutions increasingly depend on foreign students, donors, and partnerships, creating subtle but powerful incentives to avoid controversy.

    Critical thinking is often reduced to branding rather than practice. Students are encouraged to adopt identities and positions rather than interrogate evidence. Media literacy programs, where they exist at all, are thin, optional, and disconnected from the realities of algorithmic persuasion.

    Even worse, student debt has turned higher education into a high-stakes compliance system. Indebted graduates are less likely to challenge employers, institutions, or dominant narratives. Economic precarity becomes cognitive precarity.

    A Domestic Willingness to Be Deceived

    Foreign adversaries and tech elites exploit vulnerabilities, but they did not create them alone. The poisoning of the American mind has been enabled by domestic actors who benefit from confusion, resentment, and distraction.

    Political consultants, partisan media ecosystems, and privatized education interests profit from outrage and ignorance. Complex structural problems—healthcare, housing, inequality, climate—are reframed as cultural battles, keeping attention away from systems of power and extraction.

    In this environment, truth becomes negotiable, expertise becomes suspect, and education becomes a consumer product rather than a public good.

    The Long-Term Consequences

    The danger is not simply misinformation. It is the erosion of shared reality.

    A society that cannot agree on basic facts cannot govern itself. A population trained to react rather than reflect is easy to manipulate—by foreign states, domestic demagogues, or algorithmic systems optimized for profit.

    Higher education sits at the center of this crisis. If universities cannot reclaim their role as defenders of intellectual rigor and civic responsibility, they risk becoming credential factories feeding a cognitively compromised workforce.

    Toward Intellectual Self-Defense

    Reversing the poisoning of the American mind will require more than fact-checking or content moderation. It demands structural change:

    • A recommitment to public higher education as a democratic institution, not a revenue stream.

    • Robust media literacy embedded across curricula, not siloed in electives.

    • Transparency and accountability for technology platforms that shape public cognition.

    • Protection of academic freedom from both foreign pressure and domestic political interference.

    • Relief from student debt as a prerequisite for intellectual independence.

    Cognitive sovereignty is national security. Without it, no amount of military or economic power can sustain a democratic society.

    The question is not whether the American mind has been poisoned. The question is whether the institutions charged with educating it are willing to admit their failure—and do the hard work of recovery.


    Sources

    U.S. Senate Select Committee on Intelligence, reports on Russian active measures

    National Intelligence Council, foreign influence assessments

    Department of Justice investigations into Confucius Institutes

    Shoshana Zuboff, The Age of Surveillance Capitalism

    Renée DiResta et al., research on computational propaganda

    Higher Education Inquirer reporting on student debt, academic labor, and institutional capture


  • Tech Titans, Ideologues, and the Future of American Higher Education — 2026 Update

    This article is an update to our June 2025 Higher Education Inquirer report, Tech Titans, Ideologues, and the Future of American Higher Education. Since that report, the landscape of higher education has evolved dramatically. New developments — the increasing influence of billionaire philanthropists like Larry Ellison, private-equity figures such as Marc Rowan, and the shocking assassination of Charlie Kirk — have intensified the pressures on traditional colleges and universities. This update examines how these forces intersect with ideology, governance, financial power, and institutional vulnerability to reshape the future of American higher education.

    American higher education is under pressure from multiple directions, including financial strain, declining enrollment, political hostility, and technological disruption. Yet perhaps the greatest challenge comes from powerful outsiders who are actively reshaping how education is perceived, delivered, and valued. Figures such as Donald Trump, Elon Musk, Peter Thiel, Sam Altman, Alex Karp, Charlie Kirk, Larry Ellison, and Marc Rowan are steering resources, ideology, and policy in ways that threaten traditional universities’ missions. Each brings a distinct ideology and strategy, but their combined influence represents an existential pressure on the system.

    Larry Ellison, the billionaire founder of Oracle, has pledged to give away nearly all his fortune and already directs hundreds of millions toward research, medicine, and education-related causes. Through the Ellison Institute of Technology, he funds overseas campuses and scholarship programs at institutions like the University of Oxford. Ellison represents a “disruptor” who does not challenge degrees outright but reshapes the allocation of educational resources toward elite, globally networked research.

    The University of Phoenix cyberbreach is more than another entry in the long list of attacks on higher education. It is the clearest evidence yet of how private equity, aging enterprise software, and institutional neglect have converged to create a catastrophic cybersecurity landscape across American colleges and universities. What happened in the summer of 2025 was not an unavoidable act of foreign aggression. It was the culmination of years of cost-cutting, inadequate oversight, and a misplaced faith in legacy vendors that no longer control their own risks.

    The story begins with the Russian-speaking Clop cyber-extortion group, one of the most sophisticated data-theft organizations operating today. In early August, Clop quietly began exploiting a previously unknown vulnerability in Oracle’s E-Business Suite, a platform widely used for payroll, procurement, student employment, vendor relations, and financial aid administration. Oracle’s EBS system, decades old and deeply embedded across higher education, was never designed for modern threat environments. As soon as Clop identified the flaw—later assigned CVE-2025-61882—the group launched a coordinated campaign that compromised dozens of major institutions before Oracle even acknowledged the problem.
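
    To illustrate the kind of basic hygiene that was missing, here is a minimal sketch, in Python, of a version audit a campus IT office might run over its software inventory. The inventory, version strings, and patched threshold are all hypothetical; CVE-2025-61882 appears only as a label taken from the reporting above.

    # Hypothetical audit: flag systems still running an Oracle EBS release
    # below an assumed first-fixed version for CVE-2025-61882.

    PATCHED_VERSION = (12, 2, 14)  # assumed threshold, for illustration only

    inventory = {
        "payroll-prod": "12.2.11",
        "procurement": "12.2.14",
        "financial-aid": "12.2.9",
    }

    def parse_version(v: str) -> tuple[int, ...]:
        # "12.2.11" -> (12, 2, 11), so tuples compare numerically.
        return tuple(int(part) for part in v.split("."))

    for host, version in sorted(inventory.items()):
        status = "PATCH REQUIRED" if parse_version(version) < PATCHED_VERSION else "ok"
        print(f"{host}: {version} -- {status}")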

    Among the most heavily affected institutions was the University of Phoenix. Attackers gained access to administrative systems and exfiltrated highly sensitive data: names, Social Security numbers, bank accounts, routing numbers, vendor records, and financial-aid-related information belonging to students, faculty, staff, and contractors. The breach took place in August, but Phoenix did not disclose the incident until November 21, and then only after Clop publicly listed the university on its extortion site. Even after forced disclosure, Phoenix offered only vague assurances about “unauthorized access” and refused to provide concrete numbers or a full accounting of what had been stolen.

    Phoenix was not alone. Harvard University confirmed that Clop had stolen more than a terabyte of data from its Oracle systems. Dartmouth College acknowledged that personal and financial information for more than a thousand individuals had been accessed, though the total is almost certainly much higher. At the University of Pennsylvania, administrators said only that unauthorized access had occurred, declining to detail the scale. What links these incidents is not prestige, geography, or mission. It is dependency on Oracle’s aging administrative software and a sector-wide failure to adapt to a threat environment dominated by globally coordinated cybercrime operations.

    Marc Rowan, co-founder and CEO of Apollo Global Management, has leveraged private-equity wealth to influence higher education governance. He gave $50 million to Penn’s Wharton School, funding faculty and research initiatives, and has recently pushed alumni to withhold donations over issues of campus policy and antisemitism. Rowan also helped shape the Trump administration’s Compact for Academic Excellence, linking federal funding to compliance with ideologically driven standards. He exemplifies how private wealth can steer university governance and policy, reshaping priorities on a national scale. Together, Ellison and Rowan illustrate the twin dynamics of power and influence destabilizing higher education: immense private wealth, and the ambition to reshape institutions according to their own vision.

    With these powerful outsiders shaping the landscape, traditional universities increasingly face pressures to prioritize elite, donor-driven projects over broad public missions. Private funding favors high-prestige initiatives over public-access education, and large contributors can dictate leadership and policy directions. University priorities shift toward profitable or ideologically aligned projects, creating a two-tier system in which elite, insulated institutions grow while public universities struggle to compete, widening disparities in access and quality.

    The stakes of this upheaval have become tragically tangible. The assassination of Charlie Kirk in 2025 was a horrific reminder that conflicts over ideology, money, and influence are not abstract. Violence against public figures engaged in higher education policy and advocacy underscores the intensity of polarization and the human costs of these struggles. Such events cast a shadow over campuses, donor boards, and political advocacy alike, highlighting that the battle over the future of education is contested not only in boardrooms and legislatures but in life and death.

    Students face shrinking access to affordable, publicly supported higher education, particularly those without means or connections to elite institutions. Faculty may encounter restrictions on academic freedom and institutional autonomy, as donor preferences and political pressures increasingly shape hiring, curriculum, and governance. Society risks losing the traditional public mission of universities — fostering critical thinking, civic engagement, and broad social mobility — as education becomes more commodified, prioritizing elite outcomes over the public good.

    Building on our June 2025 report, this update underscores the accelerating influence of tech titans, ideologues, and billionaire philanthropists. Figures such as Ellison and Rowan are reshaping not just funding streams but governance structures, while the assassination of Charlie Kirk painfully illustrates the human stakes involved. Traditional colleges face a stark choice: maintain their public mission — democratic access, critical inquiry, and civic purpose — or retreat into survival mode, prioritizing donor dollars, corporate partnerships, and prestige. The pressures highlighted in June are not only continuing but intensifying, and the consequences — for students, faculty, and society — remain profound.


    Sources

    Fortune: Larry Ellison pledges nearly all fortune (fortune.com)

    Times Higher Education: Ellison funds Oxford scholars (timeshighereducation.com)

    Almanac UPenn: Rowan gift to Wharton (almanac.upenn.edu)

    Inquirer: Rowan donor pressure at Penn (inquirer.com)

    Inquirer: Rowan and Trump’s Compact (inquirer.com)

    Higher Education Inquirer original article (highereducationinquirer.org)


  • Who gets to decide what counts as knowledge? Big tech, AI, and the future of epistemic agency in higher education

    by Mehreen Ashraf, Eimear Nolan, Manuel F Ramirez, Gazi Islam and Dirk Lindebaum

    Walk into almost any university today, and you can be sure to encounter the topic of AI and how it affects higher education (HE). AI applications, especially large language models (LLMs), have become part of everyday academic life, being used for drafting outlines, summarising readings, and even helping students to ‘think’. For some, the emergence of LLMs is a revolution that makes learning more efficient and accessible. For others, it signals something far more unsettling: a shift in how and by whom knowledge is controlled. This latter point is the focus of our new article published in Organization Studies.

    At the heart of our article is a shift in what is referred to as epistemic (or knowledge) governance: the way in which knowledge is created, organised, and legitimised in HE. In plain terms, epistemic governance is about who gets to decide what counts as credible, whose voices are heard, and how the rules of knowing are set. Universities have historically been central to epistemic governance through peer review, academic freedom, teaching, and the public mission of scholarship. But as AI tools become deeply embedded in teaching and research, those rules are being rewritten not by educators or policymakers, but by the companies that own the technology.

    From epistemic agents to epistemic consumers

    Universities, academics, and students have traditionally been epistemic agents: active producers and interpreters of knowledge. They ask questions, test ideas, and challenge assumptions. But when we rely on AI systems to generate or validate content, we risk shifting from being agents of knowledge to consumers of knowledge. Technology takes on the heavy cognitive work: it finds sources, summarises arguments, and even produces prose that sounds academic. However, this efficiency comes at the cost of profound changes in the nature of intellectual work.

    Students who rely on AI to tidy up their essays, or generate references, will learn less about the process of critically evaluating sources, connecting ideas and constructing arguments, which are essential for reasoning through complex problems. Academics who let AI draft research sections, or feed decision letters and reviewer reports into AI with the request that it produce a ‘revision strategy’, might save time but lose the slow, reflective process that leads to original thought, while undercutting their own agency in the process. And institutions that embed AI into learning systems hand part of their epistemic governance – their authority to define what knowledge is and how it is judged – to private corporations.

    This is not about individual laziness; it is structural. As Shoshana Zuboff argued in The age of surveillance capitalism, digital infrastructures do not just collect information, they reorganise how we value and act upon it. When universities become dependent on tools owned by big tech, they enter an ecosystem where the incentives are commercial, not educational.

    Big tech and the politics of knowing

    The idea that universities might lose control of knowledge sounds abstract, but it is already visible. Jisc’s 2024 framework on AI in tertiary education warns that institutions must not ‘outsource their intellectual labour to unaccountable systems,’ yet that outsourcing is happening quietly. Many UK universities, including the University of Oxford, have signed up to corporate AI platforms to be used by staff and students alike. This, in turn, facilitates the collection of data on learning behaviours that can be fed back into proprietary models.

    This data loop gives big tech enormous influence over what is known and how it is known. A company’s algorithm can shape how research is accessed, which papers surface first, or which ‘learning outcomes’ appear most efficient to achieve. That’s epistemic governance in action: the invisible scaffolding that structures knowledge behind the scenes. At the same time, it is easy to see why AI technologies appeal to universities under pressure. AI tools promise speed, standardisation, lower costs, and measurable performance, all seductive in a sector struggling with staff shortages and audit culture. But those same features risk hollowing out the human side of scholarship: interpretation, dissent, and moral reasoning. The risk is not that AI will replace academics but that it will change them, turning universities from communities of inquiry into systems of verification.
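
    A schematic sketch, in Python, of the data loop described above: classroom interactions become telemetry, and telemetry becomes training signal for a model the university does not control. Every name and field here is hypothetical, not any vendor's real API.

    import json
    from datetime import datetime, timezone

    def log_interaction(student_id: str, prompt: str, response: str) -> dict:
        # Each classroom query is captured as a structured event.
        return {
            "ts": datetime.now(timezone.utc).isoformat(),
            "student": student_id,
            "prompt": prompt,
            "response": response,
        }

    telemetry = [
        log_interaction("s-001", "Summarise chapter 3", "Chapter 3 argues..."),
        log_interaction("s-002", "Draft an essay outline", "1. Introduction..."),
    ]

    # The vendor, not the university, decides what these events go on to train.
    training_corpus = [json.dumps(event) for event in telemetry]
    print(f"{len(training_corpus)} learning events fed back into the proprietary model")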

    The Humboldtian ideal and why it is still relevant

    The modern research university was shaped by the 19th-century thinker Wilhelm von Humboldt, who imagined higher education as a public good, a space where teaching and research were united in the pursuit of understanding. The goal was not efficiency: it was freedom. Freedom to think, to question, to fail, and to imagine differently.

    That ideal has never been perfectly achieved, but it remains a vital counterweight to market-driven logics that render AI a natural way forward in HE. When HE serves as a place of critical inquiry, it nourishes democracy itself. When it becomes a service industry optimised by algorithms, it risks producing what Žižek once called ‘humans who talk like chatbots’: fluent, but shallow.

    The drift toward organised immaturity

    Scholars like Andreas Scherer and colleagues describe this shift as organised immaturity: a condition where sociotechnical systems prompt us to stop thinking for ourselves. While AI tools appear to liberate us from labour, they are in fact narrowing the space for judgment and doubt.

    In HE, that immaturity shows up when students skip the reading because ‘ChatGPT can summarise it’, or when lecturers rely on AI slides rather than designing lessons for their own cohort. Each act seems harmless; but collectively, they erode our epistemic agency. The more we delegate cognition to systems optimised for efficiency, the less we cultivate the messy, reflective habits that sustain democratic thinking. Immanuel Kant once defined immaturity as ‘the inability to use one’s understanding without guidance from another.’ In the age of AI, that ‘other’ may well be an algorithm trained on millions of data points, but answerable to no one.

    Reclaiming epistemic agency

    So how can higher education reclaim its epistemic agency? The answer lies not only in rejecting AI but also in rethinking our possible relationships with it. Universities need to treat generative tools as objects of inquiry, not as invisible infrastructure. That means embedding critical digital literacy across curricula: not simply training students to use AI responsibly, but teaching them to question how it works, whose knowledge it privileges, and whose it leaves out.

    In classrooms, educators could experiment with comparative exercises: have students write an essay on their own, then analyse an AI version of the same task. What’s missing? What assumptions are built in? How are students changed when the AI writes the essay for them, compared with when they write it themselves? As the Russell Group’s 2024 AI principles note, ‘critical engagement must remain at the heart of learning.’

    In research, academics too must realise that their unique perspectives, disciplinary judgement, and interpretive voices matter, perhaps now more than ever, in a system where AI’s homogenisation of knowledge looms. We need to understand that the more we subscribe to values of optimisation and efficiency as preferred ways of doing academic work, the more natural AI’s penetration of HE will come to seem.

    Institutionally, universities might consider building open, transparent AI systems through consortia, rather than depending entirely on proprietary tools. This isn’t just about ethics; it’s about governance and ensuring that epistemic authority remains a public, democratic responsibility.

    Why this matters to you

    Epistemic governance and epistemic agency may sound like abstract academic terms, but they refer to something fundamental: the ability of societies and citizens (not just ‘workers’) to think for themselves. When or if universities lose control over how knowledge is created, validated and shared, we risk not just changing education but weakening democracy. As journalist George Monbiot recently wrote, ‘you cannot speak truth to power if power controls your words.’ The same is true for HE. We cannot speak truth to power if power now writes our essays, marks our assignments, and curates our reading lists.

    Mehreen Ashraf is an Assistant Professor at Cardiff Business School, Cardiff University, United Kingdom.

    Eimear Nolan is an Associate Professor in International Business at Trinity Business School, Trinity College Dublin, Ireland.

    Manuel F Ramirez is Lecturer in Organisation Studies at the University of Liverpool Management School, UK.

    Gazi Islam is Professor of People, Organizations and Society at Grenoble Ecole de Management, France.

    Dirk Lindebaum is Professor of Management and Organisation at the School of Management, University of Bath.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education
