Students have little opportunity to practice agency when an LMS tracks their assignments, when they’re not encouraged to explore different majors and when colleges shrink general education requirements, according to writer and educator John Warner.
In the latest episode of The Key, Inside Higher Ed’s news and analysis podcast, Warner tells IHE’s editor in chief, Sara Custer, that colleges should refocus on teaching students how to learn and grow.
“Agency writ large is the thing we need to survive as people … but it’s also a fundamental part of learning, particularly writing.”
Warner argues that with the arrival of AI, helping students develop agency is even more of an imperative for higher education institutions.
“AI is a homework machine … Our response cannot be ‘you’re just going to make this thing using AI now,’” Warner said. “More importantly than this is not learning anything, it is a failure to confront [the question]: What do we, as humans, do now with this technology?”
Warner also shares what he’s learned from consulting and speaking about teaching and AI at campuses across the country. Ultimately, he says, faculty can work with AI in a way that still aligns with their institutional values.
“I also feel that the sense of hopefulness…is essential. Without it there isn’t much concrete to hold on to. At least I see that this kind of outlandish dreaming and utopian imagining is, in a certain way, also reflected in the actions taken toward it. Or that nothing moves anywhere if you don’t envision it.”
This quotation illustrates how a group of higher education students understood outlandish dreaming as essential for social and political change. It also connects directly to our main argument: that political imagination is a form of political agency. The shared recognition in the quote, that imagining itself matters, shows how utopianising is potentially already a political act.
We explored higher education students’ political imagination as part of an ongoing project, Breadline Utopias. We invited students from social sciences, education, social services, and healthcare to participate in workshops on alternative futures of food assistance. These students are likely to work in roles connected to food assistance (directly and indirectly), which makes their perspectives especially relevant. Altogether, 86 students participated in seven workshops across four higher education institutions. Our analysis draws on 21 small-group discussions in which students discussed the current state of food assistance and imagined alternative futures for it.
Food assistance in Finland is a contested practice that sits between Nordic welfare state ideals and charity-based logics. Over the last 30 years, it has become an institutionalised part of Finnish society. Because food assistance is intertwined with broader societal issues, such as poverty, justice, food, and social policy, it offers a rich site for examining how students navigate and relate to political questions.
Constraints, conditions and cracks of political imagination
Our research shows that political imagination is shaped by a dynamic between three elements: constraints, conditions and cracks. This dynamic suggests that political imagination is a fragile yet meaningful mode of political agency.
Constraints block imagination
Constraints are moments where students’ political imagination becomes blocked. They act as barriers for imagination – like the dead-end of a labyrinth. Constraints relate to who is seen as able to imagine and what legitimate imagining should look like. For example, some students felt that they don’t have the “right” knowledge or experience to envision alternative futures for food assistance. As one student put it:
“Since I’m not an expert on the issue at all, it’s [difficult] to predict it or to develop it in general, because I’ve never been part of it myself.”
This lack of perceived expertise reduces students’ sense of agency, leading them to position themselves outside the political sphere. Another constraint, solution-orientation, also blocks political imagination. With this mindset, societal questions are approached as technical problems to be fixed, reflecting an internalised sense of what kinds of discussions are acceptable and meaningful.
Conditions limit imagination
Conditions shape what students imagine by acting as boundaries that limit how far imagination can extend toward contesting taken-for-granted logics of the topic in question. We identified three conditions: the ethos of helping, resource efficiency, and market-drivenness. All three stem from the charity economy frame, which means that students’ visions focused on improving helping practices, maximising existing resources (volunteers, surplus food and money) or finding market-compatible solutions. Within these boundaries, both food assistance and poverty are depoliticised, as imagination remains anchored to the existing system rather than questioning or transforming it.
Cracks reveal fractures and open up alternatives
Cracks are moments that reveal fractures in the current system and in students’ experiential worlds: openings through which alternative futures become visible or thinkable. Cracks took different forms in students’ discussions. At times, conversations drifted onto sidetracks that disrupted linear and solution-oriented talk, allowing new questions and possibilities to surface. In other moments, students paused to consider the perspectives of food assistance recipients. Such reflexive stumbles invited empathy, vulnerability and relational awareness, opening space for politicisation. There were also critical pushes in which seemingly neutral features of the system were recognised as politically constructed. These instances opened cracks through which alternative social and institutional arrangements were imagined. Finally, as the quotation in the beginning shows, outlandish dreams are moments when students collectively affirm that hope and utopias themselves matter, even though the process of imagining alternatives is difficult and involves constraints and conditions.
Political imagination as a democratic capacity
The dynamic between constraints, cracks and conditions highlights that ‘the political’ in political imagination appears unevenly – sometimes blocked, sometimes strengthened, sometimes catalysed. Thus, political imagination is not a linear process but unfolds within the dynamics of these three elements (see Picture 1). Despite its fragility, we suggest that political imagination is a meaningful mode of political agency: it is visible in how students critique existing conditions, propose alternative futures and reposition themselves in relation to the political. Political imagination makes the openness of the future visible. It is precisely this openness that gives imagination its value as a form of agency: it keeps the alternative futures alive.
Picture 1. Constraints, conditions and cracks of political imagination
Recognising these everyday acts of imagining as political practices expands our understanding of what counts as students’ political agency. In doing so, it points to political imagination as a capacity for democratic citizenship, as the ability to envision alternatives is integral to democratic life. This capacity is all the more important given reports of young people’s reduced hope for the future and the global challenges we are facing (e.g., polarisation and climate change). Cultivating such capacity should be part of higher education’s democratic mission. In other words, outlandish dreaming is neither trivial nor escapist – it is a fragile, necessary political act.
Dr Anu Lainio works as a Postdoctoral researcher at the University of Eastern Finland. Her research focuses on higher education students’ political imagination and utopias, as part of the project ‘Utopias from the breadline’. Before her current position, she completed her doctoral research at the University of Surrey, UK, where she explored higher education students’ identities and media representations across different European countries. Her research interests include political imagination; media and higher education; and student identities, agency, and (in)equalities in the context of marketised higher education and a changing welfare state.
Dr Anna Sofia Salonen is a University Lecturer in Practical Theology at the University of Eastern Finland. Her research combines sociological and theological perspectives to explore issues related to food, religion, ethics, inequality and the food system. She is the principal investigator of the research project “Utopias from the breadline” (Kone Foundation 2023-2025) and leads a work package on nonreligion in research project “Religion, Meaning and Masculinity” (Research Council of Finland 2024-2028).
International Ph.D. students and postdoctoral scholars drive a large share of the United States’ scientific research, innovation and global competitiveness. Yet these visa holders often face systemic barriers that limit their ability to build independent, fulfilling careers. Restricted access to fellowships and immigration constraints can stifle career agency, forcing the nation’s institutions to lose out on the very global talent they train to fuel discovery and progress.
Drawing from insights in our recently released book, Thriving as an International Scientist (University of California Press), this essay outlines key challenges that international scientists face and concrete steps universities, employers and scientific societies can take to enable their dynamic career success.
Systemic Barriers to Career Independence
The U.S. depends on international talent to sustain its scientific enterprise. In 2023, nearly 41 percent of Ph.D. students and 58 percent of postdocs in U.S. universities were visa holders, and international scholars made up 34 percent of Ph.D. graduates in 2022, an increase from just 11 percent in 1977.
While U.S. universities still lead globally in training and employing a robust international scientific workforce, the recent anti-immigrant climate in the U.S. and growing global competition for STEM talent threaten this long-standing advantage. Two issues impacting international scientists stand out as particularly urgent: limited access to independent research fellowships and visa policies that restrict career flexibility.
Fewer fellowships lead to reduced agency. International scientists have access to fewer fellowships for supporting their independent research ideas. Data on primary sources of STEM doctoral student funding indicates 17 percent of international Ph.D. students relied primarily on fellowships, scholarships or dissertation grants in 2022, compared to 29 percent of their U.S. citizen and permanent resident peers. More than half of international Ph.D. students in science and engineering across U.S. universities relied on faculty-directed funding, through research assistantships, compared with just a third of domestic students (citizens and permanent residents).
This reliance limits their autonomy to define research directions or confidently pursue professional development and internship opportunities. As a result, only 22 percent of international Ph.D. graduates from U.S. universities committed to academic careers (excluding postdocs) in 2022, in part due to a significant lack of independent funding experience—a key qualification for faculty roles.
Visa constraints on career mobility. Visa regulations often confine international scientists to narrowly defined “research-related” roles in academia or industry. This restriction effectively locks them out of emerging career paths in the business of science, science policy, science communication, entrepreneurship, university administration and nonprofit leadership until they obtain permanent residency.
They are also disproportionately vulnerable to economic downturns or layoffs. Work visas typically allow a 60-day grace period to secure new employment and maintain legal immigration status, putting tremendous pressure on individuals and families. With rising costs and uncertainty surrounding H-1B work visas, employers may also hesitate to hire international scientists, compounding career instability for this essential segment of the STEM workforce.
Expand access to independent funding. Increase visibility of funding through databases such as Pivot and create matching fellowship opportunities from institutional, corporate and philanthropic sources that are open to noncitizens.
Track and leverage alumni outcomes. Analyze Ph.D. and postdoctoral career outcomes by citizenship and location in order to strengthen alumni mentorship and global networks for trainees.
Provide specialized professional development for Ph.D.s. Offer training in in-demand and holistic skills to address wicked problems, advance emerging technologies and foster knowledge of a range of careers for STEM Ph.D. holders.
Integrate career development into curricula. Embed professional development and career preparation within graduate and postdoctoral programs, rather than limiting them to extracurricular workshops, in order to encourage international scientists to participate.
Foster equitable access to internships. Simplify and expand opportunities for experiential learning by using the Curricular Practical Training path. Departments can offer internship courses through which students can use CPT, or encourage them to incorporate insights from their internships into the dissertation. Creating more practical opportunities for students to apply their research skills broadly improves their chances of obtaining work visas for diverse careers.
At Princeton University, one of us developed a specialized professional development series for international graduate students integrating creative design, intentional career planning, immigration literacy and strategies for global careers. This approach helps international scholars build resilience, community and agency in navigating complex systems and uncertain futures.
The Role of Scientific and Professional Societies
Scientific and professional societies hold powerful levers for nationwide systemic change. Through initiatives that foster advocacy, partnerships and innovation, they can amplify the impact of international scientists and shape more inclusive policies.
Diversify funding models. As scientific leaders reconsider, through convenings (e.g., by NASEM and UIDP), how to continue funding STEM research at scale in the U.S., including graduate and postdoctoral programs, public-private-philanthropic partnerships must intentionally include considerations by and for international graduate students and postdocs in their planning and implementation.
Require professional development. Foundations and philanthropic funders can make career and professional development a standard component of fellowships and sponsored research grants, following the precedents set by the National Science Foundation and the National Institutes of Health.
Mobilize advocacy through data. Public-facing dashboards, such as the NAFSA International Student Economic Value Tool and the OPT Observatory from the Institute for Progress, demonstrate the economic and intellectual value of international scientists. These are powerful tools for storytelling, advocacy and policy change.
Encourage immigration innovation. Beyond ongoing legislative efforts like the bipartisan Keep STEM Talent Act aiming to support the U.S. STEM workforce, the philanthropic sector can also pilot creative solutions. For instance, Renaissance Philanthropy’s Talent Mobility Fund raises awareness of underutilized immigration pathways such as O-1 and J-1 visas, diversifying routes available for STEM researchers.
Employer Responsibility
Employers across all sectors—universities, for-profit industries and nonprofit organizations—have a shared responsibility to create transparent, informed hiring practices for visa holders. Too often, candidates are left to initiate uncomfortable sponsorship discussions during job interviews. Instead, hiring managers should proactively coordinate with human resources and legal teams before posting positions to determine sponsorship possibilities, costs and timelines. Even small changes, such as explicitly noting “visa sponsorship available” (or not available) in job descriptions, can make a significant difference in promoting fairness and equity in hiring.
Moving Forward: Shared Responsibility for Systemic Change
The ability of international scientists to thrive is not just a matter of ethics and fairness—it is a strategic imperative for the future of American science and innovation. Universities, scientific societies, funders and employers have a shared responsibility to participate in removing systemic barriers and expanding opportunities for international scientists in a variety of careers.
While large-scale policy change may take time, meaningful progress is possible through small, immediate steps:
Expanding access to independent funding and internships,
Increasing transparency through data, and
Fostering mentorship and advocacy networks.
By enabling international scientists to build dynamic, independent careers, we strengthen not only their futures but also the vitality and global leadership of the U.S. research enterprise.
Sonali Majumdar (she/her) is assistant dean for professional development in Princeton University’s Graduate School and author of Thriving as an International Scientist: Professional Development for Global STEM Citizens (October 2025, University of California Press). She is a member of the Graduate Career Consortium—an organization providing an international voice for graduate-level career and professional development leaders.
Adriana Bankston (she/her) is a strong advocate for the research enterprise and for supporting the next-generation STEM workforce, and a former AAAS/ASGCT Congressional Policy Fellow in the U.S. House of Representatives. She contributed to a chapter in Thriving as an International Scientist on systemic reforms and policy change in academia.
by Mehreen Ashraf, Eimear Nolan, Manuel F Ramirez, Gazi Islam and Dirk Lindebaum
Walk into almost any university today, and you can be sure to encounter the topic of AI and how it affects higher education (HE). AI applications, especially large language models (LLMs), have become part of everyday academic life, being used for drafting outlines, summarising readings, and even helping students to ‘think’. For some, the emergence of LLMs is a revolution that makes learning more efficient and accessible. For others, it signals something far more unsettling: a shift in how and by whom knowledge is controlled. This latter point is the focus of our new article published in Organization Studies.
At the heart of our article is a shift in what is referred to as epistemic (or knowledge) governance: the way in which knowledge is created, organised, and legitimised in HE. In plain terms, epistemic governance is about who gets to decide what counts as credible, whose voices are heard, and how the rules of knowing are set. Universities have historically been central to epistemic governance through peer review, academic freedom, teaching, and the public mission of scholarship. But as AI tools become deeply embedded in teaching and research, those rules are being rewritten not by educators or policymakers, but by the companies that own the technology.
From epistemic agents to epistemic consumers
Universities, academics, and students have traditionally been epistemic agents: active producers and interpreters of knowledge. They ask questions, test ideas, and challenge assumptions. But when we rely on AI systems to generate or validate content, we risk shifting from being agents of knowledge to consumers of knowledge. Technology takes on the heavy cognitive work: it finds sources, summarises arguments, and even produces prose that sounds academic. However, this efficiency comes at the cost of profound changes in the nature of intellectual work.
Students who rely on AI to tidy up their essays, or generate references, will learn less about the process of critically evaluating sources, connecting ideas and constructing arguments, which are essential for reasoning through complex problems. Academics who let AI draft research sections, or feed decision letters and reviewer reports into AI with the request that AI produces a ‘revision strategy’, might save time but lose the slow, reflective process that leads to original thought, while undercutting their own agency in the process. And institutions that embed AI into learning systems hand part of their epistemic governance – their authority to define what knowledge is and how it is judged – to private corporations.
This is not about individual laziness; it is structural. As Shoshana Zuboff argued in The age of surveillance capitalism, digital infrastructures do not just collect information, they reorganise how we value and act upon it. When universities become dependent on tools owned by big tech, they enter an ecosystem where the incentives are commercial, not educational.
Big tech and the politics of knowing
The idea that universities might lose control of knowledge sounds abstract, but it is already visible. Jisc’s 2024 framework on AI in tertiary education warns that institutions must not ‘outsource their intellectual labour to unaccountable systems,’ yet that outsourcing is happening quietly. Many UK universities, including the University of Oxford, have signed up to corporate AI platforms to be used by staff and students alike. This, in turn, facilitates the collection of data on learning behaviours that can be fed back into proprietary models.
This data loop gives big tech enormous influence over what is known and how it is known. A company’s algorithm can shape how research is accessed, which papers surface first, or which ‘learning outcomes’ appear most efficient to achieve. That’s epistemic governance in action: the invisible scaffolding that structures knowledge behind the scenes. At the same time, it is easy to see why AI technologies appeal to universities under pressure. AI tools promise speed, standardisation, lower costs, and measurable performance, all seductive in a sector struggling with staff shortages and audit culture. But those same features risk hollowing out the human side of scholarship: interpretation, dissent, and moral reasoning. The risk is not that AI will replace academics but that it will change them, turning universities from communities of inquiry into systems of verification.
The Humboldtian ideal and why it is still relevant
The modern research university was shaped by the 19th-century thinker Wilhelm von Humboldt, who imagined higher education as a public good, a space where teaching and research were united in the pursuit of understanding. The goal was not efficiency: it was freedom. Freedom to think, to question, to fail, and to imagine differently.
That ideal has never been perfectly achieved, but it remains a vital counterweight to market-driven logics that render AI a natural way forward in HE. When HE serves as a place of critical inquiry, it nourishes democracy itself. When it becomes a service industry optimised by algorithms, it risks producing what Žižek once called ‘humans who talk like chatbots’: fluent, but shallow.
The drift toward organised immaturity
Scholars like Andreas Scherer and colleagues describe this shift as organised immaturity: a condition in which sociotechnical systems prompt us to stop thinking for ourselves. While AI tools appear to liberate us from labour, they are actually narrowing the space for judgment and doubt.
In HE, that immaturity shows up when students skip the reading because ‘ChatGPT can summarise it’, or when lecturers rely on AI slides rather than designing lessons for their own cohort. Each act seems harmless; but collectively, they erode our epistemic agency. The more we delegate cognition to systems optimised for efficiency, the less we cultivate the messy, reflective habits that sustain democratic thinking. Immanuel Kant once defined immaturity as ‘the inability to use one’s understanding without guidance from another.’ In the age of AI, that ‘other’ may well be an algorithm trained on millions of data points, but answerable to no one.
Reclaiming epistemic agency
So how can higher education reclaim its epistemic agency? The answer lies not only in rejecting AI but also in rethinking our possible relationships with it. Universities need to treat generative tools as objects of inquiry, not an invisible infrastructure. That means embedding critical digital literacy across curricula: not simply training students to use AI responsibly, but teaching them to question how it works, whose knowledge it privileges, and whose it leaves out.
In classrooms, educators could experiment with comparative exercises: have students write an essay on their own, then analyse an AI version of the same task. What’s missing? What assumptions are built in? How were students changed when the AI wrote the essay for them, and when they wrote it themselves? As the Russell Group’s 2024 AI principles note, ‘critical engagement must remain at the heart of learning.’
In research, academics too must realise that their unique perspectives, disciplinary judgement, and interpretive voices matter, perhaps now more than ever, in a system where AI’s homogenisation of knowledge looms. We need to understand that the more we subscribe to optimisation and efficiency as the preferred ways of doing academic work, the more natural AI’s penetration into HE will seem.
Institutionally, universities might consider building open, transparent AI systems through consortia, rather than depending entirely on proprietary tools. This isn’t just about ethics; it’s about governance and ensuring that epistemic authority remains a public, democratic responsibility.
Why this matters to you
Epistemic governance and epistemic agency may sound like abstract academic terms, but they refer to something fundamental: the ability of societies and citizens (not just ‘workers’) to think for themselves. If universities lose control over how knowledge is created, validated and shared, we risk not just changing education but weakening democracy. As journalist George Monbiot recently wrote, ‘you cannot speak truth to power if power controls your words.’ The same is true for HE. We cannot speak truth to power if power now writes our essays, marks our assignments, and curates our reading lists.
Mehreen Ashraf is an Assistant Professor at Cardiff Business School, Cardiff University, United Kingdom.
Eimear Nolan is an Associate Professor in International Business at Trinity Business School, Trinity College Dublin, Ireland.
Manuel F Ramirez is Lecturer in Organisation Studies at the University of Liverpool Management School, UK.
Gazi Islam is Professor of People, Organizations and Society at Grenoble Ecole de Management, France.
Dirk Lindebaum is Professor of Management and Organisation at the School of Management, University of Bath.
The U.S. Department of Education’s Office for Civil Rights has determined that George Mason University violated federal civil rights law by using race as a factor in hiring and promotion decisions, the agency announced on Friday.
The finding concluded that GMU violated Title VI of the Civil Rights Act of 1964, which prohibits discrimination based on race, color, and national origin in federally funded education programs. The university now has 10 days to accept a proposed resolution agreement or risk losing federal funding.
Acting Assistant Secretary for Civil Rights Craig Trainor said President Gregory Washington led “a university-wide campaign to implement unlawful DEI policies that intentionally discriminate on the basis of race.”
“You can’t make this up,” Trainor said in a statement, noting that Washington had previously called for removing “racist vestiges” from campus in 2020.
The investigation, launched in July 2025, stemmed from complaints filed by multiple GMU professors who alleged the university adopted preferential treatment policies for faculty from “underrepresented groups” between 2020 and the present.
Federal investigators said they found several problematic practices. As recently as fall 2024, they said, the university’s website stated it “may choose to waive the competitive search process when there is an opportunity to hire a candidate who strategically advances the institutional commitment to diversity and inclusion.”
The current Faculty Handbook also requires approval from the “Office of Access, Compliance, and Community” – previously called the “Office of Diversity, Equity, and Inclusion” until GMU renamed it in March 2025 – before extending job offers.
One high-level administrator told investigators that Washington “created an atmosphere of surveillance” regarding hiring decisions related to diversity objectives.
Under the proposed resolution agreement, Washington must personally issue a statement and apology to the university community, acknowledging the discriminatory practices. The university must also revise hiring policies, conduct annual training, and remove any provisions encouraging racial preferences.
GMU must post the presidential statement prominently on its website and remove any contradictory materials. The university would also be required to maintain compliance records and designate a coordinator to work with federal officials.
George Mason University, located in Fairfax, Virginia, enrolls approximately 39,000 students and receives federal funding that could be at risk if the violations are not resolved.
George Mason officials said that they are reviewing the specific resolution steps proposed by the Department of Education.
“We will continue to respond fully and cooperatively to all inquiries from the Department of Education, the Department of Justice and the U.S. House of Representatives and evaluate the evidence that comes to light,” the university said in a statement. “Our sole focus is our fiduciary duty to serve the best interests of the University and the people of the Commonwealth of Virginia.”
Dive Brief:
The U.S. Department of Education sent a “Dear Colleague” letter on Tuesday to district and state leaders encouraging them to integrate artificial intelligence in schools through existing federal grants and guiding them on how to do so.
The letter signed by U.S. Education Secretary Linda McMahon said that grantees may use federal funds to use AI to enhance high-quality curriculum tools, high-impact tutoring, and college and career pathway advising.
The department briefly also outlined its principles for responsible AI use in schools. Those principles affirmed that AI K-12 initiatives should be educator-led, ethical, accessible for those with disabilities, transparent in the way new tools are rolled out, and in compliance with federal data privacy laws.
Dive Insight:
The department’s new AI guidance comes at a time when the future of federal oversight of ed tech and K-12 cybersecurity policies remains unclear, given that the Trump administration shuttered the Education Department’s Office of Educational Technology in March and has continued to move toward its plan to dismantle the agency.
On Monday, the Education Department also published a proposed rule in the Federal Register regarding its priorities for using discretionary grant programs to support AI use in schools. The public comment period on the regulatory proposal is open until Aug. 20.
Under the proposal, those seeking federal grant funding for AI projects in schools would need to include a focus on at least one of the following goals:
Embed AI literacy skills into classroom lessons to ultimately improve students’ educational outcomes.
Provide educators with professional development in foundational skills for computer science and AI with instruction on how to responsibly use new technologies.
Partner with states or school districts to offer high school students dual enrollment credentialing opportunities for postsecondary or industry-recognized credentials in AI.
Support and develop evidence for appropriate ways to integrate AI into education.
Use AI to support services for students with disabilities.
Tap into AI to improve teacher training and evaluation.
Use AI tools to reduce time-intensive administrative tasks.
Meanwhile, over 400 school district leaders sent a letter to Congress last week asking for lawmakers to restore federal leadership for K-12 cybersecurity and ed tech.
The letter, led by the Consortium for School Networking, pointed to funding cuts at the Cybersecurity and Infrastructure Security Agency that led to the discontinuation of K-12 cybersecurity programs offered through the Multi-State Information Sharing and Analysis Center. The move, they wrote, consequently took away “critical threat intelligence, incident response, and coordination services that many school systems depend on to protect against ransomware and other attacks.”
OET’s closure also left a major hole in guidance for states and districts on key issues such as responsible AI use, digital design, digital access and cybersecurity strategy, the letter said. The district leaders also called for Congress to reinstate staffing for the office.
CoSN CEO Keith Krueger said district technology leaders are increasingly worried that AI will be used for cyberattacks against schools. He added that the demand for more K-12 resources to protect schools from cybersecurity threats is “incredible.”
For instance, the Federal Communications Commission in November 2024 received $3.7 billion in requests for federal funds to help protect district networks. The applications were for a $200 million FCC cybersecurity pilot program.
But the bottom line, Krueger said, is that if the Trump administration fulfills its promise to close the Education Department, “who exactly is going to help school districts with cybersecurity, for instance, or AI?”
Dive Brief:
The State University System of Florida’s governing board plans to vote Friday to approve the creation of the Commission for Public Higher Education, a new accreditor formed by the state university system and five other Southern public higher education networks.
The state’s Legislature has devoted $4 million to the Florida governing board to help with startup costs for the new accreditor, according to CPHE’s business plan. The accreditor expects the other five university systems to devote a similar level of resources to the effort.
CPHE hopes to begin accrediting six institutions by June 2026 and to become recognized by the U.S. Department of Education by June 2028. Accreditors must operate for two years before the Education Department will recognize them.
Dive Insight:
Florida Gov. Ron DeSantis announced the formation of the new accreditor for public colleges in late June, criticizing diversity, equity and inclusion standards at existing agencies and framing the effort as a way to focus more on student outcomes.
The new accreditor’s business plan echoes those aims.
“CPHE will laser-focus on student outcomes, streamline accreditation standards, focus on emerging educational models, modernize the accreditation process, maximize efficiency without sacrificing quality, and ensure no imposition of divisive ideological content on institutions,” it states.
Still, the new agency is a long way off from getting the Education Department’s recognition, which is required before its accreditation can grant colleges access to federal financial aid. The business plan notes that the Education Department usually takes at least two years to recognize a new accreditor after it submits its application, which it plans to submit in 2026.
The Education Department currently recognizes about two dozen institutional accreditors, according to a federal database.
Colleges that want to be accredited by CPHE will be able to retain their current agency while the new accreditor seeks the Education Department’s recognition, according to the business plan. Once it becomes federally recognized, colleges can make CPHE their primary accreditor and shed their other agency, if they wish.
The founding members of the new accreditor are the State University System of Florida, the Texas A&M University System, the University System of Georgia, the University of North Carolina System, the University of South Carolina system and the University of Tennessee System.
Each system will appoint someone to sit on CPHE’s board of directors, which will establish accreditation standards and policies.
The new accreditor will also create a paid Interim Review Committee, which will conduct peer reviews of colleges and make recommendations to the board of directors about accreditation actions. The committee will report to CPHE’s board of directors and include academic experts, auditors and compliance officers.
The business plan credits recent federal policy changes for making it easier for colleges to jump to new accrediting agencies.
That includes a regulatory change during President Donald Trump’s first term that removed regional restrictions on the nation’s seven major accreditors, meaning they each can now represent colleges nationwide instead of only those located in their traditional geographic territories.
The Education Department said it will decide on accreditation change requests within 30 days. If the agency doesn’t respond by that deadline, colleges will receive automatic approval unless they don’t meet the eligibility requirements.
One higher education expert has described the deadline as a “30-day rubber stamp,” arguing that it takes time and expertise to conduct such reviews. Yet the procedural changes are coming even as the Education Department attempts to shed roughly half of its staff.
Colleges will not be eligible to switch if they’ve faced accreditor sanctions within the last two years. However, they will be able to switch for a litany of other reasons, including objecting to their current accreditors’ standards.
Both Florida and North Carolina legislators have passed laws in recent years requiring public colleges to switch accreditors each cycle, which usually runs between six and 10 years. The changes came after each state’s public university system publicly clashed with its accreditor, the Southern Association of Colleges and Schools Commission on Colleges.
SACSCOC accredits each college within the six founding members’ university systems. However, some institutions in Florida and Texas have begun the process of switching to new agencies, according to CPHE’s business plan.
More than 300 NIH employees criticized their director, Jay Bhattacharya, in the letter published Monday.
Hundreds of staff at the National Institutes of Health are publicly condemning the agency’s actions in recent months, including firing thousands of workers and canceling research grants for projects that don’t align with the Trump administration’s ideologies.
In a letter sent Monday morning to Jay Bhattacharya, the Trump-appointed NIH director who gained notoriety for his criticism of the NIH’s handling of the COVID-19 pandemic, more than 300 employees from across the agency called on him to deliver on his promise to embrace dissent, which he has called “the very essence of science.”
“We are compelled to speak up when our leadership prioritizes political momentum over human safety and faithful stewardship of public resources,” states the letter, titled the Bethesda Declaration (Bethesda, Md., is the location of the NIH’s main campus) and modeled after Bhattacharya’s own Great Barrington Declaration, the 2020 open letter in which he and other scientists argued against broad COVID-19 lockdowns in favor of more targeted precautions.
“This censorship is incompatible with academic freedom, which should not be applied selectively based on political ideology.”
In addition to accusing Bhattacharya of politicizing research, the letter published Monday also criticized the agency for “undermining” peer review, unilaterally capping indirect costs and firing NIH staff.
Bhattacharya is scheduled to appear before a Senate appropriations subcommittee today to discuss Trump’s proposal to cut $18 billion, or about 40 percent, from the NIH’s budget.
Department of Education
[Docket No.: ED-2025-SCC-0002]
AGENCY:
Federal Student Aid (FSA), Department of Education (ED).
ACTION:
Notice.
SUMMARY:
In accordance with the Paperwork Reduction Act (PRA) of 1995, the Department is proposing a revision of a currently approved information collection request (ICR).
DATES:
Interested persons are invited to submit comments on or before June 18, 2025.
ADDRESSES:
Written comments and recommendations for proposed information collection requests should be submitted within 30 days of publication of this notice. Click on this link www.reginfo.gov/public/do/PRAMain to access the site. Find this information collection request (ICR) by selecting “Department of Education” under “Currently Under Review,” then check the “Only Show ICR for Public Comment” checkbox. Reginfo.gov provides two links to view documents related to this information collection request. Information collection forms and instructions may be found by clicking on the “View Information Collection (IC) List” link. Supporting statements and other supporting documentation may be found by clicking on the “View Supporting Statement and Other Documents” link.
FOR FURTHER INFORMATION CONTACT:
For specific questions related to collection activities, please contact Carolyn Rose, 202-453-5967.
SUPPLEMENTARY INFORMATION:
The Department is especially interested in public comment addressing the following issues: (1) is this collection necessary to the proper functions of the Department; (2) will this information be processed and used in a timely manner; (3) is the estimate of burden accurate; (4) how might the Department enhance the quality, utility, and clarity of the information to be collected; and (5) how might the Department minimize the burden of this collection on the respondents, including through the use of information technology. Please note that written comments received in response to this notice will be considered public records.
Title of Collection: Borrower Defense to Loan Repayment Universal Forms.
OMB Control Number: 1845-0163.
Type of Review: A revision of a currently approved ICR.
Respondents/Affected Public: Individuals and Households.
Total Estimated Number of Annual Responses: 83,750.
Total Estimated Number of Annual Burden Hours: 217,750.
Abstract: On April 4, 2024, the U.S. Court of Appeals for the Fifth Circuit granted a preliminary injunction against 34 CFR 685.400 et seq. (“2023 Regulation”), enjoining the rule and postponing the effective date of the regulation pending final judgment in the case. The current Borrower Defense to Repayment application and related Request for Reconsideration are drafted to conform to the enjoined provisions of the 2023 Regulation. This request is to revise the currently approved information collection 1845-0163 to comply with the regulatory requirements of the borrower defense regulations that are still in effect: 34 CFR 685.206(e) (“2020 Regulation”), 34 CFR 685.222 (“2016 Regulation”), and 34 CFR 685.206(c) (“1995 Regulation”) (together, the “current regulations”). These regulatory requirements are distinct from the 2023 Regulation’s provisions. The revision is part of contingency planning in case the 2023 Regulation is permanently struck down. The Department of Education (“the Department”) is attaching an updated Borrower Defense Application and an application for Request for Reconsideration. The forms will be available in paper and electronic formats on studentaid.gov and will provide borrowers with an easily accessible and clear method to provide the information necessary for the Department to review and process claim applications. Also, under the current regulations, the Department will no longer require a group application or a group reconsideration application.
Dated: May 13, 2025.
Brian Fu,
Program and Management Analyst, Office of Planning, Evaluation and Policy Development.
One in three chief technology and information officers says their institution is significantly more reliant on artificial intelligence than it was even last year, according to the Inside Higher Ed/Hanover Research 2025 Survey of Campus Chief Technology/Information Officers, published today. Yet those same campus tech leaders also indicate their institutions are struggling with AI governance at a time of upheaval for higher education.
The fragmentation in campus technology policies and approaches is only adding “another layer of uncertainty” to the general chaos, said Chris van der Kaay, a one-time college CIO and current higher education consultant specializing in AI policy.
Some additional disconnects: Only a third of campus tech leaders say investing in generative artificial intelligence is a high or essential priority for their institution, and just 19 percent say higher education is adeptly handling the rise of AI.
This, combined with technology companies’ growing influence in society and the sector, raises big questions about college and university agency in defining how AI will shape their futures.
Maintaining Control
“Colleges and universities have to be in control of how AI is being used unless they want the private sector dictating how it will be used at their institutions,” van der Kaay said. “If they want to maintain control and be at the forefront of change, helping institutions adapt and supporting staff and faculty needs—they have to make it a top priority.”
More on the Survey
On Wednesday, June 18, at 2 p.m. Eastern, Inside Higher Ed will present a webcast to discuss the results of the survey. Please register here.
This independent Inside Higher Ed Survey of Campus Chief Technology/Information Officers was supported in part by Softdocs, Grammarly, Jenzabar and T-Mobile for Education.
Inside Higher Ed’s 2025 Survey of Campus Chief Technology/Information Officers was conducted by Hanover Research. The survey included 108 CTOs from public and private institutions, two-year and four-year, for a margin of error of 9 percent. A copy of the free report can be downloaded here.
Between February and March of this year, Inside Higher Ed and Hanover Research sent surveys to 2,197 college and university CTOs. Of the 108 who submitted responses, providing a valuable snapshot of this terrain, 59 percent serve on an executive cabinet or council at their institution. But close to half believe their college isn’t fully leveraging their knowledge and insights to inform strategic decisions and planning involving technology.
And it’s in that environment that the majority of CTOs reported both a rise in demand for online education and a lack of formal AI governance: 31 percent say their institution hasn’t created any AI use policies, including those that address teaching, research, student services and administrative tasks.
Similar to last year’s survey results, just 11 percent of CTOs indicate their institution has a comprehensive AI strategy, while about half (53 percent) believe their institution puts more emphasis on thinking about AI for individual use cases than thinking about it at an enterprise scale.
“AI has implications for every single area of an organization. It’s not just another technology we have to learn. It’s much broader than that,” van der Kaay said. “AI has us not only thinking about how we’re doing things but why we’re doing them, which is why it’s important to have that enterprise-level thinking in using these tools. If we’re just trying to use AI to accomplish things based on decades-old policies, processes, procedures—that’s not the most effective use.”
Ultimately, van der Kaay said he’s “optimistic that it’s giving us an opportunity here to make a lot of meaningful change.”
Digital Divides and Risks Persist
But the rise of AI has also heightened long-standing problems for colleges and universities, including access divides and cybersecurity concerns.
As the technology allows hackers to carry out larger-scale, more sophisticated breaches, only three in 10 CTOs are highly confident their college’s practices can prevent cyberattackers from compromising data and intellectual property, or launching a ransomware event. Van der Kaay said that while this likely reflects the cautious mindset of many CTOs, creating sound cybersecurity policy underscores the need for a cohesive, campuswide technology strategy.
“You don’t want an IT department just locking down stuff without working collaboratively with the faculty and staff to make sure there’s no impact on the learning process,” he said, noting that cybersecurity systems are also expensive. “If CTOs are not engaged with senior leadership and education planning at the highest level, that’s a problem.”
Beyond internal discussions and challenges, external influences are forcing rapid changes to the resources, focus and delivery of higher education.
Since President Donald Trump began his second term in January, his administration has cut billions in federal research funding to higher education institutions, leaving even wealthy institutions with craters in their budgets. At the same time, large technology companies are marketing AI-driven products to colleges and students as tools capable of moving the needle on student success—though many in the academic community are still skeptical of those claims.
Student success is also top of mind for CTOs surveyed, including 68 percent who say leveraging data for student success insights is a high or essential priority in digital transformation efforts and 59 percent who say the same of teaching and learning. While 39 percent of CTOs say their institution has set specific goals for digital transformation, none has yet achieved a complete transformation.
Commonly cited barriers to meeting those digital transformation goals include an insufficient number of IT personnel, insufficient financial investment, and data-quality and/or integration issues.
“Data by itself is fine, but it just tells you what’s wrong,” said Glenda Morgan, an education technology market analyst for Phil Hill and Associates. “But you need to take action after, which is harder.” She added that taking effective action to improve student outcomes is even more urgent as of this week, after House Republicans on the Education and the Workforce Committee advanced a bill known as the Student Success and Taxpayer Savings Plan, which would create a risk-sharing program making colleges partially responsible for unpaid student loans.
“Emerging technologies do have a role to play, but probably not as much as many vendors and CTOs might think,” Morgan said. “You need the data to make the moves, but it also needs to be linked to student journeys.”
Days before the House advanced that bill, Trump issued an executive order calling for AI literacy in K-12 schools through public-private partnerships with AI industry groups, nonprofits and academic institutions that will develop those resources.
The results of that AI literacy directive will have implications for higher education, too. While school districts may start requiring their teachers to start using specific education-technology products, university instructors have more autonomy in how they choose to incorporate technology—if at all.
“We’re going to have to respond to that by going to state legislative bodies to get funding to make sure our faculty are prepared to teach AI-literate students and that our students are prepared to go into the workforce,” said Marc Watkins, a lecturer in creative writing and assistant director of academic innovation at the University of Mississippi. “AI isn’t going away; it’s only becoming more advanced. If you don’t actually have a plan to start thinking about what it’s going to look like over the next five years, it’s going to be incredibly hard to catch up.”
But getting the resources to make that happen won’t be like “waving a magic wand,” Watkins emphasized. “It’s going to take time, and a lot of thoughtful purchases and initiatives that involve human beings. It’s not just flipping a switch.”
While some institutions, such as the California State University system, have already made big investments in giving every student access to generative AI tools, the CTO survey suggests that half of colleges don’t grant students access to such tools. And those disparities will only deepen at universities that don’t invest in AI or create comprehensive policies that translate into action.
“You can have a vision statement about AI, but if every school, department and teacher has their own say about how to incorporate AI, it creates a difficult situation to navigate,” Watkins said. “For students, it’s nagging to think about what they should be expected to know about generative AI. How can they be AI-literate and workforce-ready when many faculty still think it’s cheating? We need to have open conversations about how AI is changing knowledge.”