Blog

  • Why our critics’ whataboutery over Jimmy Kimmel is wrong

    Why our critics’ whataboutery over Jimmy Kimmel is wrong

    Jimmy Kimmel just got suspended for having an opinion. 

    Specifically, ABC pulled Jimmy Kimmel Live! off the air following backlash from the head of the Federal Communications Commission over remarks Kimmel made last week about the murder of Charlie Kirk. We have been vocal in our opposition to this, and as a result, we’ve received some criticism online. So let’s take a minute to consider some of the most common remarks and address those concerns.

We have said an extraordinary number of things. Let’s start with the amicus briefs we filed last term in two Supreme Court cases about government pressure on private companies, NRA v. Vullo and Murthy v. Missouri.

Vullo was about the superintendent of New York’s Department of Financial Services pressuring insurance companies and banks to stop providing services to the National Rifle Association. The Department didn’t quite order the companies not to work with the NRA. Instead, it met with the companies, pointed out regulatory violations that didn’t have anything to do with the NRA, and suggested the state might be less interested in pursuing those violations if the companies “ceased providing insurance to gun groups, especially the NRA.”

    As our brief explains: 

    The First Amendment does not permit the government to censor speech via informal or indirect means. When government officials “invok[e] legal sanctions and other means of coercion, persuasion, and intimidation” to chill disfavored speech, they impose “a scheme of state censorship” just as unlawful as direct regulation.

    The Supreme Court unanimously agreed. 

In Murthy, the states of Missouri and Louisiana and a handful of individual social media users sued various federal agencies and officials, arguing that content moderation private companies conducted during COVID-19 and the 2020 election was driven by “jawboning” — that is, informal, coercive government pressure — from the Biden administration. Our brief argued that such informal pressure violates the First Amendment.

We wrote:

    Although much attention has focused on the power of “Big Tech,” it is a bad idea for government officials to huddle in back rooms with corporate honchos to decide which social media posts are “truthful” or “good” while insisting, Wizard of Oz-style, “pay no attention to that man behind the curtain.” No matter how concerning it may be when private decisionmakers employ opaque or unwise moderation policies, allowing government actors to surreptitiously exercise control is far worse.

    There’s simply no daylight between our positions in these cases and our position in the current FCC controversy.

In terms of advocacy, we stood up for a professor accused of bias after writing that if you question the lab origin of COVID-19, you should “at least consider that you are an idiot who is swallowing a whole lot of Chinese cock swaddle.” We argued against a University of Iowa policy limiting what faculty could say about masks and the virus. We criticized NYU for telling medical faculty not to talk to the press. We stood up for RAs in Virginia whose university told them they couldn’t talk about it, and for RAs in Maryland who were told they couldn’t criticize their university’s response.

In research, last year we wrote the FIRE Report on Social Media 2024, where we expanded on the problem of jawboning, and specifically how it occurred during the Biden administration. We noted, based on a survey from FIRE and IPSOS, that:

    The American public is concerned about this issue. Our polling found that 77% of Americans believe it is important for social media companies to “[be] fully transparent about any government involvement in content moderation decisions,” including 79% of Democrats, 75% of Republicans, and 81% of independents.

    So, yes. We have been loud, clear, and consistent on this issue for quite some time.

    X post 2

In fact, we opposed cancel culture then and we oppose cancel culture now. And respectfully, with affection to our co-workers, we’re not what you’d generally consider “key cultural tastemakers.”

    X post 3

    Actually, we have taken a public stand on people’s right to protest on either side of the abortion debate. Until 2022, we were focused on college campuses, so abortion clinics generally weren’t in the mission, but we did stand up for pro-life students and their right to express themselves. Just as we stood up for pro-choice students. And pro-marijuana students. And pro-Nicki Minaj students. (There are no anti-Nicki Minaj students. That’s a scientific fact. But if there were, we’d stand up for them.)

    But since we have expanded to cover freedom of expression on campus and off, Sarah McLaughlin, senior scholar of global expression at FIRE, has drawn attention to cases exactly like the one you cite.

    X post 4

Not only did we not ban or deplatform anyone during COVID, we vehemently oppose speech bans and deplatforming. In fact, we keep a Campus Deplatforming Database to help monitor and curb such practices. And we have spoken out against government efforts to police disinformation as well as misinformation.

    X post 5

    Yes, but we were a campus-only organization at the time, and by the time we expanded in 2022, she had already sued. (Some of us were pissed personally that they ruined The Mandalorian to do it, if that helps.) Also, there’s a difference between a company caving to a cancel-culture campaign like the one that targeted Carano, and a company making a similar decision under threats from the government. Yes, both are bad for free-speech culture. But one is prohibited by the Constitution.

    Source link

  • CHRO Conversations – CUPA-HR

    CHRO Conversations – CUPA-HR

    Amid the changes and uncertainty higher ed institutions are navigating, chief HR officers (CHROs) are providing critical leadership to their institutions and workforce. To support those efforts, our new CHRO Conversations offer monthly opportunities for CHROs to hear about strategies being implemented on other campuses, to discuss solutions to emerging challenges, and to connect with one another in a space where conversation is encouraged.

    Hosted by CUPA-HR, each 60-minute virtual session features knowledgeable and experienced higher ed HR leaders sharing real-world insights on a timely topic — from workforce planning and HR systems to compensation and strategic leadership. Each session includes breakout discussions to create a high-impact experience tailored to the demands of today’s HR leaders navigating ambiguity and change.

    Important: These events are open only to higher ed chief HR officers at CUPA-HR member institutions. Seats are limited to support interaction among participants, so come prepared to engage, reflect and share ideas. These events will not be recorded.

    Source link

  • Breaking Away from Rankings – Edu Alliance Journal

    Breaking Away from Rankings – Edu Alliance Journal

    The Growing Movement to Reform Research Assessment and Rankings

    By Dean Hoke, September 22, 2025: For the past fifteen years, I have been closely observing what can only be described as a worldwide fascination—if not obsession—with university rankings, whether produced by Times Higher Education, QS, or U.S. News & World Report. In countless conversations with university officials, a recurring theme emerges: while most acknowledge that rankings are often overused by students, parents, and even funders when making critical decisions, few deny their influence. Nearly everyone agrees that rankings are a “necessary evil”—flawed, yet unavoidable—and many institutions still direct significant marketing resources toward leveraging rankings as part of their recruitment strategies.

It is against this backdrop of reliance and ambivalence that recent developments, such as Sorbonne University’s decision to withdraw from THE rankings, deserve closer attention.

    In a move that signals a potential paradigm shift in how universities position themselves globally, Sorbonne University recently announced it will withdraw from the Times Higher Education (THE) World University Rankings starting in 2026. This decision isn’t an isolated act of defiance—Utrecht University had already left THE in 2023, and the Coalition for Advancing Research Assessment (CoARA), founded in 2022, has grown to 767 members by September 2025. Together, these milestones reflect a growing international movement that questions the very foundations of how we evaluate academic excellence.

    The Sorbonne Statement: Quality Over Competition

    Sorbonne’s withdrawal from THE rankings isn’t merely about rejecting a single ranking system. It appears to be a philosophical statement about what universities should stand for in the 21st century. The institution has made it clear that it refuses to be defined by its position in what it sees as commercial ranking matrices that reduce complex academic institutions to simple numerical scores.

    Understanding CoARA: The Quiet Revolution

The Coalition for Advancing Research Assessment represents one of the most significant challenges to traditional academic evaluation methods in decades. Established in 2022, CoARA has grown rapidly to include 767 member organizations as of September 2025. This isn’t just a European phenomenon—though European institutions have been early and enthusiastic adopters.

    The Four Pillars of Reform

    CoARA’s approach centers on four key commitments that directly challenge the status quo:

1. Abandoning Inappropriate Metrics: The agreement explicitly calls for abandoning “inappropriate uses of journal- and publication-based metrics, in particular inappropriate uses of Journal Impact Factor (JIF) and h-index.” This represents a direct assault on the quantitative measures that have dominated academic assessment for decades.

2. Avoiding Institutional Rankings: Perhaps most relevant to the Sorbonne’s decision, CoARA commits signatories to “avoid the use of rankings of research organisations in research assessment.” This doesn’t explicitly require withdrawal from ranking systems, but it does commit institutions to not using these rankings in their own evaluation processes.

3. Emphasizing Qualitative Assessment: The coalition promotes qualitative assessment methods, including peer review and expert judgment, over purely quantitative metrics. This represents a return to more traditional forms of academic evaluation, albeit updated for modern needs.

4. Responsible Use of Indicators: Rather than eliminating all quantitative measures, CoARA advocates for the responsible use of indicators that truly reflect research quality and impact, rather than simply output volume or citation counts.

    European Leadership

    Top 10 Countries by CoARA Membership:

    The geographic distribution of CoARA members tells a compelling story about where resistance to traditional ranking systems is concentrated. As the chart shows, European countries dominate participation, led by Spain and Italy, with strong engagement also from Poland, France, and several Nordic countries. This European dominance isn’t accidental—the region’s research ecosystem has long been concerned about the Anglo-American dominance of global university rankings and the way these systems can distort institutional priorities.

    Prestigious European universities like ETH Zurich, the University of Zurich, Politecnico di Milano, and the University of Manchester are among the members, lending credibility to the movement. However, the data reveals that the majority of CoARA members (84.4%) are not ranked in major global systems like QS, which adds weight to critics’ arguments about institutional motivations.

    CoARA Members Ranked vs Not Ranked in QS:

    The Regional Divide: Participation Patterns Across the Globe

    What’s particularly striking about the CoARA movement is the relative absence of U.S. institutions. While European universities have flocked to join the coalition, American participation remains limited. This disparity reflects fundamental differences in how higher education systems operate across regions.

    American Participation: The clearest data we have on institutional cooperation with ranking systems comes from the United States. Despite some opposition to rankings, 78.1% of the nearly 1,500 ranked institutions returned their statistical information to U.S. News in 2024, showing that the vast majority of American institutions remain committed to these systems. However, there have been some notable American defections. Columbia University is among the latest institutions to withdraw from U.S. News & World Report college rankings, joining a small but growing list of American institutions questioning these systems. Yet these remain exceptions rather than the rule.

    European Engagement: While we don’t have equivalent participation rate statistics for European institutions, we can observe their engagement patterns differently. 688 universities appear in the QS Europe ranking for 2024, and 162 institutions from Northern Europe alone appear in the QS World University Rankings: Europe 2025. However, European institutions have simultaneously embraced the CoARA movement in large numbers, suggesting a more complex relationship with ranking systems—continued participation alongside philosophical opposition.

    Global Participation Challenges: For other regions, comprehensive participation data is harder to come by. The Arab region has 115 entries across five broad areas of study in QS rankings, but these numbers reflect institutional inclusion rather than active cooperation rates. It’s important to note that some ranking systems use publicly available data regardless of whether institutions actively participate or cooperate with the ranking organizations.

    This data limitation itself is significant—the fact that we have detailed participation statistics for American institutions but not for other regions may reflect the more formalized and transparent nature of ranking participation in the U.S. system versus other global regions.

    American universities, particularly those in the top tiers, have largely benefited from existing ranking systems. The global prestige and financial advantages that come with high rankings create powerful incentives to maintain the status quo. For many American institutions, rankings aren’t just about prestige—they’re about attracting international students, faculty, and research partnerships that are crucial to their business models.

    Beyond Sorbonne: Other Institutional Departures

    Sorbonne isn’t alone in taking action. Utrecht University withdrew from THE rankings earlier, citing concerns about the emphasis on scoring and competition. These moves suggest that some institutions are willing to sacrifice prestige benefits to align with their values. Interestingly, the Sorbonne has embraced alternative ranking systems such as the Leiden Open Rankings, which highlight its impact.

    The Skeptics’ View: Sour Grapes or Principled Stand?

Not everyone sees moves like Sorbonne’s withdrawal as a principled stand. Critics argue that institutions often raise philosophical objections only after slipping in the rankings. As one university administrator put it: “If the Sorbonne were doing well in the rankings, they wouldn’t want to leave. We all know why self-assessment is preferred. ‘Stop the world, we want to get off’ is petulance, not policy.”

    This critique resonates because many CoARA members are not major players in global rankings, which fuels suspicion that reform may be as much about strategic positioning as about values. For skeptics, the call for qualitative peer review and expert judgment risks becoming little more than institutions grading themselves or turning to sympathetic peers.

    The Stakes: Prestige vs. Principle

At the heart of this debate is a fundamental tension: Should universities prioritize visibility and prestige in global markets, or focus on measures of excellence that reflect their mission and impact? For institutions like the Sorbonne, stepping away from THE rankings is a bet that long-term reputation will rest more on substance than on league table positions. But in a globalized higher education market, the risk is real—rankings remain influential signals to students, faculty, and research partners.

Rankings also exert practical influence in ways that reformers cannot ignore. Governments frequently use global league tables as benchmarks for research funding allocations or as part of national excellence initiatives. International students, particularly those traveling across continents, often rely on rankings to identify credible destinations, and faculty recruitment decisions are shaped by institutional prestige. In short, rankings remain a form of currency in the global higher education market.

    This is why the decision to step away from them carries risk. Institutions like the Sorbonne and Utrecht may gain credibility among reform-minded peers, but they could also face disadvantages in attracting international talent or demonstrating competitiveness to funders. Whether the gamble pays off will depend on whether alternative measures like CoARA or ROI rankings achieve sufficient recognition to guide these critical decisions.

Rankings as Necessary Evil: The Practical Reality

    Yet rankings are unlikely to disappear. For students, employers, and funders, they remain a convenient—if imperfect—way to compare institutions across borders. The practical reality is that rankings will continue to coexist with newer approaches, even as reform efforts reshape how universities evaluate themselves internally.

    Alternative Rankings: The Rise of Outcome-Based Assessment

    While CoARA challenges traditional rankings, a parallel trend focuses on outcome-based measures such as return on investment (ROI) and career impact. Georgetown University’s Center on Education and the Workforce, for example, ranks more than 4,000 colleges on the long-term earnings of their graduates. Its findings tell a very different story than research-heavy rankings—Harvey Mudd College, which rarely appears at the top of global research lists, leads ROI tables with graduates projected to earn $4.5 million over 40 years.

Other outcome-oriented systems emphasize affordability, employment, and post-graduation success. These approaches highlight institutions that may be overlooked by global research rankings but deliver strong results for students. Together, they represent a pragmatic counterbalance to CoARA’s reform agenda, showing that students and employers increasingly want measures of institutional value beyond research metrics alone.

These alternative models can be seen most vividly in rankings that emphasize affordability and career outcomes. The Princeton Review’s “Best Value” rankings, for example, combine measures of financial aid, academic rigor, and post-graduation outcomes to highlight institutions that deliver strong returns for students relative to their costs. Public universities often rise in these rankings, as do specialized colleges that may not feature prominently in global research tables.

    Institutions like the Albany College of Pharmacy and Health Sciences illustrate this point. Although virtually invisible in global rankings, Albany graduates report median salaries of $124,700 just ten years after graduation, placing the college among the best in the nation on ROI measures. For students and families making education decisions, data like this often carries more weight than a university’s position in QS or THE.

    Together with Georgetown’s ROI rankings and the example of Harvey Mudd College, these cases suggest that outcome-based rankings are not marginal alternatives—they are becoming essential tools for understanding institutional value in ways that matter directly to students and employers.

The Future of Academic Assessment

    The CoARA movement and actions like Sorbonne’s withdrawal represent more than just dissatisfaction with current ranking systems. They reflect deeper questions about the values and purposes of higher education in the 21st century.

    If the movement gains momentum, we could see:

    Diversification of evaluation methods, with different regions and institution types developing assessment approaches that align with their specific values and goals

    Reduced emphasis on competition between institutions in favor of collaboration and shared improvement

    Greater focus on societal impact rather than purely academic metrics

    More transparent and open assessment processes that allow for a better understanding of institutional strengths and contributions

    Conclusion: Evolution, Not Revolution

    The Coalition for Advancing Research Assessment and decisions like Sorbonne’s withdrawal from THE rankings represent important challenges to how we evaluate universities, but they signal evolution rather than revolution. Instead of the end of rankings, we are witnessing their diversification. ROI-based rankings, outcome-focused measures, and reform initiatives like CoARA now coexist alongside traditional global league tables, each serving different audiences.

    Skeptics may dismiss reform as “sour grapes,” yet the concerns CoARA raises about distorted incentives and narrow metrics are legitimate. At the same time, American resistance reflects both philosophical differences and the pragmatic advantages U.S. institutions enjoy under current systems.

    The most likely future is a pluralistic landscape: research universities adopting CoARA principles internally while maintaining a presence in global rankings for visibility; career-focused institutions highlighting ROI and student outcomes; and students, faculty, and employers learning to navigate multiple sources of information rather than relying on a single hierarchy.

    In an era when universities must demonstrate their value to society, conversations about how we measure excellence are timely and necessary. Whether change comes gradually or accelerates, the one-size-fits-all approach is fading. A more complex mix of measures is emerging—and that may ultimately serve students, institutions, and society better than the systems we are leaving behind. In the end, what many once described to me as a “necessary evil” may persist—but in a more balanced landscape where rankings are just one measure among many, rather than the single obsession that has dominated higher education for so long.


    Dean Hoke is Managing Partner of Edu Alliance Group, a higher education consultancy. He formerly served as President/CEO of the American Association of University Administrators (AAUA). Dean has worked with higher education institutions worldwide. With decades of experience in higher education leadership, consulting, and institutional strategy, he brings a wealth of knowledge on colleges’ challenges and opportunities. Dean is the Executive Producer and co-host for the podcast series Small College America.

    Source link

  • Turning the Corner | HESA

    Turning the Corner | HESA

    Things have been bleak in higher education the last couple of years, and no doubt they will remain bleak for a while. But it recently became clear to me how we’ll know that we are turning the corner: it will be the moment when provincial governments start allowing significant rises in domestic tuition.

    This became clear to me when I was having a discussion with a senior provincial official (in a province I shall not name) about tuition. I was arguing that with provincial budgets flat and declining international enrolment, domestic tuition needed to increase – and that there was plenty of room to do so given the affordability trends of the last couple of decades.

What affordability trends, you ask? I’m glad you asked. Affordability is a ratio where the cost of a good or service is the numerator and some measure of ability to pay is the denominator. So, let’s look at what it takes to pay average tuition and fees. Figure 1 shows average tuition as a percentage of the median income of couple families and lone-parent families aged 45-54. As you can see, for the average couple family, average tuition (which – recall last Wednesday’s blog – is an overestimate for most students) has never been more affordable in the twenty-first century. For lone-parent families, current levels of tuition are at a twenty-year low.

    Figure 1: Average Undergraduate Tuition and Fees as a Percentage of Median Family Income, Couple Family and Lone-Parent Families aged 45-54, Canada, 2000-2024
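In formula terms, the metric behind Figure 1 is simply tuition over income (a generic sketch of the ratio, not HESA’s exact series definitions):

$$\text{Affordability}_t = \frac{\text{average tuition and fees}_t}{\text{median family income}_t} \times 100\%$$

A falling ratio means tuition is claiming a smaller share of family income, which is what both the couple-family and lone-parent series show.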

    Ah, you say, but that’s tuition as a function of parental ability-to-pay – what about students? Well, it’s basically the same story – calculated as a percentage of the average student wage, tuition has not been this cheap since the turn of the century, and in Ontario, it has dropped by 27% since 2017. And yes, the national story is to a large degree a function of what’s been going on in Ontario, but over the past decade or so, this ratio has been declining in all provinces except Manitoba, Saskatchewan and Alberta.

    Figure 2: Number of Hours Worked at Median Hourly Income for Canadians Aged 15-24 Required to pay Average Undergraduate Tuition and Fees, Canada and Ontario, 1997-2024

    And that’s before we even touch the issue of student aid, which as you all know is way up this century even after we take student population growth into account. In real dollars, we’ve gone from a $10B/year student aid system to a $20B/year system with the vast majority of growth coming on the non-repayable side, rather than from loans.

    Figure 3: Total Student Financial Assistance by Type, Selected years, 1993-94 to 2023-2024, in Millions, in $2023

    In fact, student aid expenditures are so high nowadays that across both universities and colleges we spend about $3 billion more in student aid than we take in from tuition fees. That’s NEGATIVE NET TUITION, PEOPLE.

    Figure 4: Aggregate Non-Repayable Aid vs Aggregate Domestic Tuition fees, 2007-08 to 2023-24, in Billions, in $2023

    So, yeah, affordability trends. They are much more favorable to students than most people think.

Anyway, the provincial official seemed a bit nonplussed by my reply: my sense is that they had never been briefed on the degree to which tuition increases have been thrown into reverse these past few years, and they certainly didn’t know about the huge increase in non-repayable aid over the past few decades. They didn’t push back on any of this evidence, BUT, they insisted, tuition fees weren’t going up because doing so is hard and it’s unpopular.

    To which I responded: well, sure. But was raising tuition any easier or less unpopular in 1989 when the Quebec Liberal government more than doubled tuition? Than in the mid-90s when both the NDP and Conservative governments allowed tuition to rise? Than in 2001 when the BC Liberals allowed tuition to increase by 50%? This has been done before. There’s absolutely no reason it can’t be done again. The only thing it will take is the courage to put the requirements of institutions that actually build economies and societies ahead of the cheap, short-term sugar highs of chasing things like “affordability”. 

    Now, to be fair, I don’t for the moment see any provincial governments prepared to do this. If there is one thing that seems to unite provincial governments these days, it is an inability to make hard decisions. But this particular political moment won’t last forever. It might take a serious, long-term recession to knock it into various heads that no matter how much money we sink into them, natural resources and construction alone won’t run this economy. Eventually, we’re going to have to re-build the great college and university system we’re in the middle of trashing. 

    And we’ll know that moment has come when provincial governments agree that domestic tuition should rise again.

    Source link

  • The Next System Teach-Ins and Their Role in Higher Education

    The Next System Teach-Ins and Their Role in Higher Education

    In a time when higher education grapples with systemic challenges—rising tuition, debt burdens, underfunding, and institutional inertia—the Next System Teach-Ins emerge as a powerful catalyst for critical dialogue, community engagement, and transformative thinking.


    A Legacy of Teach-Ins: From Vietnam to System Change

    Teach-ins have long functioned as dynamic forums that transcend mere lecturing, incorporating participatory dialogue and strategic action. The concept originated in March 1965 at the University of Michigan in direct protest of the Vietnam War; faculty and students stayed up all night, creating an intellectual and activist space that sparked over 100 similar events in that year alone.

    This model evolved through the decades—fueling the environmental, civil rights, and anti-apartheid movements of the 1970s and 1980s, followed by the Democracy Teach-Ins of the 1990s which challenged corporate influence in universities and energized anti-sweatshop activism. Later waves during Occupy Wall Street and Black Lives Matter sustained teach-ins as a tool for inclusive dialogue and resistance.


    The Next System Teach-Ins: Vision, Scope, and Impact

    Vision and Purpose

    Launched in Spring 2016, the Next System Teach-Ins aimed to broaden public awareness of systemic alternatives to capitalism—ranging from worker cooperatives and community land trusts to decentralized energy systems and democratic public banking.

    These teach-ins were designed not just as academic discussion forums but as launching pads for community-led action, connecting participants with toolkits, facilitation guides, ready-made curricula, and resources to design their own events.

    Highlights of the Inaugural Wave

    In early 2016, notable teach-ins took place across the U.S.—from Madison and New York City to Seattle and beyond. Participants explored pressing questions such as, “What comes after capitalism?” and “How can communities co-design alternatives that are just, sustainable, and democratic?”

    These gatherings showcased a blend of plenaries, interactive workshops, radio segments, and “wall-to-wall” organizing strategies—mobilizing participants beyond attendee numbers into collective engagement.

    Resources and Capacity Building

    Organizers were provided with a wealth of support materials including modular curriculum, templates for publicity and RFPs, event agendas, speaker lists, and online infrastructure to manage RSVPs and share media.

    The goal was dual: ignite a nationwide conversation on alternative systemic models, and encourage each teach-in host to aim for a specific local outcome—whether that be a campus campaign, curriculum integration, or forming ongoing community groups.


    2025: Renewed Momentum

    The Next System initiative has evolved. According to a May 2025 update from George Mason University’s Next System Studies, a new wave of Next System Teach-Ins is scheduled for November 1–16, 2025.

    This iteration amplifies the original mission: confronting interconnected social, ecological, political, and economic crises by gathering diverse communities—on campuses, in union halls, or public spaces—to rethink, redesign, and rebuild toward a more equitable and sustainable future.


    Why This Matters for Higher Education (HEI’s Perspective)

    Teach-ins revitalize civic engagement on campus by reasserting higher education’s role as an engine of critical thought and imagination.

    They integrate scholarship and practice, uniting theory with actionable strategies—from economic democracy to ecological regeneration—and enrich academic purpose with real-world relevance.

    They also mobilize institutional infrastructure, offering student-led exploration of systemic change without requiring prohibitive resources.

    By linking the global and the local, teach-ins equip universities to address both planetary crises and campus-specific challenges.

    Most importantly, they trigger systemic dialogue, pushing past complacency and fostering a new generation of system-thinking leaders.


    Looking Ahead: Institutional Opportunities

    • Host a Teach-In – Whether a focused film screening, interdisciplinary workshop, or full-scale weekend event, universities can leverage Next System resources to design context-sensitive, action-oriented programs.

    • Embed in Curriculum – The modular material—especially case studies on democratic economics, energy justice, or communal models—can integrate into courses in sociology, environmental studies, governance, and beyond.

    • Forge Community Partnerships – By extending beyond campus (to community centers, labor unions, public libraries), teach-ins expand access and deepen impact.

    • Contribute to a National Movement – University participation in the November 2025 wave positions institutions as active contributors to a growing ecosystem of systemic transformation.


    A Bold Experiment

The Next System Teach-Ins represent a bold experiment in higher education’s engagement with systemic change. Combining rich traditions of activism with pragmatic tools for contemporary challenges, these initiatives offer HEIs a blueprint for meaningful civic education, collaborative inquiry, and institutional transformation.

    As the 2025 wave approaches, universities have a timely opportunity to be centers of both reflection and action in building the next system we all need.


    Source link

  • Online Course Gives College Students a Foundation on GenAI

    Online Course Gives College Students a Foundation on GenAI

    As more employers identify uses for generative artificial intelligence in the workplace, colleges are embedding tech skills into the curriculum to best prepare students for their careers.

But identifying how and when to deliver that content has been a challenge, particularly given the varying perspectives different disciplines have on generative AI and when its use should be allowed. A June report from Tyton Partners found that 42 percent of students use generative AI tools at least weekly, and two-thirds of students rely on a single generative AI tool like ChatGPT. A survey by Inside Higher Ed and Generation Lab found that 85 percent of students had used generative AI for coursework in the past year, most often for brainstorming or asking questions.

    The University of Mary Washington developed an asynchronous one-credit course to give all students enrolled this fall a baseline foundation of AI knowledge. The optional class, which was offered over the summer at no cost to students, introduced them to AI ethics, tools, copyright concerns and potential career impacts.

    The goal is to help students use the tools thoughtfully and intelligently, said Anand Rao, director of Mary Washington’s center for AI and the liberal arts. Initial results show most students learned something from the course, and they want more teaching on how AI applies to their majors and future careers.

    How it works: The course, IDIS 300: Introduction to AI, was offered to any new or returning UMW student to be completed any time between June and August. Students who opted in were added to a digital classroom with eight modules, each containing a short video, assigned readings, a discussion board and a quiz assignment. The class was for credit, graded as pass-fail, but didn’t fulfill any general education requirements.

Course content ranged from how to use AI tools and prompt generative AI to academic integrity, professional development and how to critically evaluate AI responses.

    “I thought those were all really important as a starting point, and that still just scratches the surface,” Rao said.

    The course is not designed to make everyone an AI user, Rao said, “but I do want them to be able to speak thoughtfully and intelligently about the use of tools, the application of tools and when and how they make decisions in which they’ll be able to use those tools.”

    At the end of the course, students submitted a short paper analyzing an AI tool used in their field or discipline—its output, use cases and ways the tool could be improved.

    Rao developed most of the content, but he collaborated with campus stakeholders who could provide additional insight, such as the Honor Council, to lay out how AI use is articulated in the honor code.

    The impact: In total, the first class enrolled 249 students from a variety of majors and disciplines, or about 6 percent of the university’s total undergrad population. A significant number of the course enrollees were incoming freshmen. Eighty-eight percent of students passed the course, and most had positive feedback on the class content and structure.

    In postcourse surveys, 68 percent of participants indicated IDIS 300 should be a mandatory course or highly recommended for all students.

    “If you know nothing about AI, then this course is a great place to start,” said one junior, noting that the content builds from the basics to direct career applications.

What’s next: Rao is exploring ways to scale the course in the future, including by developing intermediate or advanced classes or creating discipline-specific offerings. He’s also hoping to recruit additional instructors, because the course’s large size created some challenges, such as sustaining meaningful exchanges on the discussion board.

    The center will continue to host educational and discussion-based events throughout the year to continue critical conversations regarding generative AI. The first debate, centered on AI and the environment, aims to evaluate whether AI’s impact will be a net positive or negative over the next decade, Rao said.

    The university is also considering ways to engage the wider campus community and those outside the institution with basic AI knowledge. IDIS 300 content will be made available to nonstudents this year as a Canvas page. Some teachers in the local school district said they’d like to teach the class as a dual-enrollment course in the future.

    Source link

  • A Better Way to Prepare for Job Interviews (opinion)

    A Better Way to Prepare for Job Interviews (opinion)

    One of the things that makes interviews stressful is their unpredictability, which is unfortunately also what makes them so hard to prepare for. In particular, it’s impossible to predict exactly what questions you will be asked. So, how do you get ready?

    Scripting out answers for every possible question is a popular strategy but a losing battle. There are too many (maybe infinite?) possible questions and simply not enough time. In the end, you’ll spend all your time writing and you still won’t have answers to most of the questions you might face. And while it might make you feel briefly more confident, that confidence is unlikely to survive the stress and distress of the actual interview. You’ll be rigid rather than flexible, robotic rather than responsive.

    This article outlines an interview-preparation strategy that is both easier and more effective than frantic answer scripting, one that will leave you able to answer just about any interview question smoothly.

    Step 1: Themes

    While you can’t know what questions you will get, you can pretty easily predict many of the topics your interviewers will be curious about. You can be pretty sure that an interviewer will be interested in talking about collaboration, for example, even if you can’t say for sure whether they’ll ask a standard question like “Tell us about a time when you worked with a team to achieve a goal” or something weirder like “What role do you usually play on a team?”

    Your first step is to figure out the themes that their questions are most likely to touch on. Luckily, I can offer you a starter pack. Here are five topics that are likely to show up in an interview for just about any job, so it pays to prepare for them no matter what:

    1. Communication
    2. Collaboration (including conflict!)
    3. Time and project management
    4. Problem-solving and creativity
    5. Failures and setbacks

    But you also need to identify themes that are specific to the job or field you are interviewing for. For a research and development scientist position, for example, an interviewer might also be interested in innovation and scientific thinking. For a project or product manager position, they’ll probably want to know about stakeholder management. And so on.

    To identify these specific themes, check the job ad. They may have already identified themes for you by categorizing the responsibilities or qualifications, or you can just look for patterns. What topics/ideas/words come up most often in the ad that aren’t already represented in the starter pack? What kinds of skills or experience are they expecting? If you get stuck, try throwing the ad into a word cloud generator and see what it spits out.
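If the word cloud route feels too fuzzy, the same idea can be done with a few lines of Python. This is only an illustrative sketch: the job_ad.txt filename and the tiny stopword list are placeholders, not part of any particular tool.

```python
# Count the most frequent meaningful words in a job ad (illustrative sketch).
from collections import Counter
import re

STOPWORDS = {"with", "that", "this", "will", "your", "their", "have", "from"}

with open("job_ad.txt", encoding="utf-8") as f:       # placeholder file holding the ad text
    words = re.findall(r"[a-z]+", f.read().lower())   # lowercase alphabetic tokens only

counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
for word, n in counts.most_common(15):                # recurring terms often point to themes
    print(f"{word}: {n}")
```

Whatever method you use, the point is the same: the terms that recur in the ad are a strong hint about the themes your interviewers care about.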

    Ideally, try to end this step with at least three new themes, in addition to the starter pack.

    Step 2: Stories

    The strongest interview answers are anchored by a specific story from your experience, which provides a tangible demonstration about how you think and what you can do. But it’s incredibly difficult to come up with a good, relevant example in the heat of an interview, let alone to tell it effectively in a short amount of time. For that, you need some preparation and some practice.

    So for each of your themes, identify two to three relevant stories. Stories can be big (a whole project from beginning to end), or they can be small (a single interaction with a student). They can be hugely consequential (a decision you made that changed the course of your career), or they can be minor but meaningful (a small disagreement you handled well). What is most important is that the stories demonstrate your skills, experiences and attitudes clearly and compellingly.

    The point is to have a lot of material to work with, so aim for at least 10 stories total, and preferably more. The same story can apply to multiple themes, but try not to let double-dipping limit the number of stories you end up with.

Then, for each of your stories, write an outline that gives just enough context to understand the situation, describes your actions and thinking, and says what happened at the end. Use the STAR method if it’s useful for keeping your stories tight and focused. Shaping your stories and deciding what to say (and not say) will help your audience see your skills in action with minimal distractions. This is one of the most important parts of your prep, so take your time with it.

    Step 3: Approaches

    As important as stories are in interviewing, you usually can’t just respond to a question with a story without any framing or explanation. So you’ll want to develop language to describe some of your usual strategies, orientations or approaches to situations that fall into each of the themes. That language will help you easily link each question to one of your stories.

    So for each theme, do a little brainstorming to identify your core messaging: “What do I usually do when faced with a situation related to [THEME]?” Then write a few bullet points. (You can also reverse engineer this from the stories: Read the stories linked to a particular theme, then look for patterns in your thinking or behavior.)

    These bullet points give you what you need to form connective tissue between the specific question they ask and the story you want to tell. So if they ask, “Tell me about a time when you worked with a team to achieve a goal,” you can respond with a story and close out by describing how that illustrates a particular approach. Or if they ask, “What role do you usually play on a team?” you can start by describing how you think about collaboration and your role in it and then tell a story that illustrates that approach.

    Though we are focusing on thematic questions here, make sure to also prepare bullet points for some of the most common general interview questions, like “Why do you want this job?” and “Tell us about yourself.”

    Step 4: Bring It All Together

    You really, really, really need to practice out loud before your interview. Over the years, I’ve found that many of the graduate students and postdocs I work with spend a lot of time thinking about how they might answer questions and not nearly enough time actually trying to answer them. And so they miss the opportunity to develop the kind of fluency and flexibility that helps one navigate the unpredictable environment of an interview.

    Here’s how to use the prep you did in Steps 1-3 to practice:

    • First, practice telling each of your 10-plus stories out loud, at least three times each. The goal here is to develop fluency in your storytelling, so you can keep things focused and flowing without needing to think about it.
    • Second, for each of the bullet points you created in Step 3, practice explaining it (out loud!) a few times, ideally in a couple of different ways.
    • Third, practice bringing it all together by answering some actual interview questions. Find a long list of interview questions (like this one), then pick questions at random to answer. The randomness is important, because the goal is to practice making smooth and effective connections between questions, stories and approaches. You need to figure out what to do when you run into a question that is challenging, unexpected or just confusing.
    • And once you’ve done that, do it all again.

    In the end, you’ve created a set of building blocks that you can arrange and rearrange as needed in the moment. And it’s a set you can keep adding to with more stories and more themes, keep practicing with new questions, and keep adapting for your next interview.

    Derek Attig is assistant dean for career and professional development in the Graduate College of the University of Illinois at Urbana-Champaign. Derek is a member of the Graduate Career Consortium, an organization providing an international voice for graduate-level career and professional development leaders.

    Source link

  • Why Did College Board End Best Admissions Product? (opinion)

    Why Did College Board End Best Admissions Product? (opinion)

    Earlier this month, College Board announced its decision to kill Landscape, a race-neutral tool that allowed admissions readers to better understand a student’s context for opportunity. After an awkward 2019 rollout as the “Adversity Score,” Landscape gradually gained traction in many selective admissions offices. Among other items, the dashboard provided information on the applicant’s high school, including the economic makeup of their high school class, participation trends for Advanced Placement courses and the school’s percentile SAT scores, as well as information about the local community.

Landscape was one of the more extensively studied interventions in the world of college admissions, with research showing that providing more information about an applicant’s circumstances can boost the likelihood of a low-income student being admitted. Admissions officers lack high-quality, detailed information on the high school environment for an estimated 25 percent of applicants, a trend that disproportionately disadvantages low-income students. Landscape helped fill that critical gap.

    While not every admissions office used it, Landscape was fairly popular within pockets of the admissions community, as it provided a more standardized, consistent way for admissions readers to understand an applicant’s environment. So why did College Board decide to ax it? In its statement on the decision, College Board noted that “federal and state policy continues to evolve around how institutions use demographic and geographic information in admissions.” The statement seems to be referring to the Trump administration’s nonbinding guidance that institutions should not use geographic targeting as a proxy for race in admissions.

    If College Board was worried that somehow people were using the tool as a proxy for race (and they weren’t), well, it wasn’t a very good one. In the most comprehensive study of Landscape being used on the ground, researchers found that it didn’t do anything to increase racial/ethnic diversity in admissions. Things are different when it comes to economic diversity. Use of Landscape is linked with a boost in the likelihood of admission for low-income students. As such, it was a helpful tool given the continued underrepresentation of low-income students at selective institutions.

    Still, no study to date found that Landscape had any effect on racial/ethnic diversity. The findings are unsurprising. After all, Landscape was, to quote College Board, “intentionally developed without the use or consideration of data on race or ethnicity.” If you look at the laundry list of items included in Landscape, absent are items like the racial/ethnic demographics of the high school, neighborhood or community.

    While race and class are correlated, they certainly aren’t interchangeable. Admissions officers weren’t using Landscape as a proxy for race; they were using it to compare a student’s SAT score or AP course load to those of their high school classmates. Ivy League institutions that have gone back to requiring SAT/ACT scores have stressed the importance of evaluating test scores in the student’s high school context. Eliminating Landscape makes it harder to do so.

An important consideration: Even if using Landscape were linked with increased racial/ethnic diversity, its usage would not violate the law. The Supreme Court recently declined to hear Coalition for TJ v. Fairfax County School Board, and in doing so likely issued a tacit blessing on race-neutral methods to advance diversity in admissions. The decision leaves intact the Fourth Circuit opinion, which affirmed the race-neutral admissions policy used to boost diversity at Thomas Jefferson High School for Science and Technology.

The court also recognized the validity of race-neutral methods to pursue diversity in the 1989 case City of Richmond v. J.A. Croson Co. In a concurring opinion filed in Students for Fair Admissions (SFFA) v. Harvard, Justice Brett Kavanaugh quoted Justice Antonin Scalia’s words from Croson: “And governments and universities still ‘can, of course, act to undo the effects of past discrimination in many permissible ways that do not involve classification by race.’”

    College Board’s decision to ditch Landscape sends an incredibly problematic message: that tools to pursue diversity, even economic diversity, aren’t worth defending due to the fear of litigation. If a giant like College Board won’t stand behind its own perfectly legal effort to support diversity, what kind of message does that send? Regardless, colleges and universities need to remember their commitments to diversity, both racial and economic. Yes, post-SFFA, race-conscious admissions has been considerably restricted. Still, despite the bluster of the Trump administration, most tools commonly used to expand access remain legal.

    The decision to kill Landscape is incredibly disappointing, both pragmatically and symbolically. It’s a loss for efforts to broaden economic diversity at elite institutions, yet another casualty in the Trump administration’s assault on diversity. Even if the College Board has decided to abandon Landscape, institutions must not forget their obligations to make higher education more accessible to low-income students of all races and ethnicities.

    Source link

  • Selecting and Supporting New Vice Chancellors: Reflections on Process & Practice – PART 1 

    Selecting and Supporting New Vice Chancellors: Reflections on Process & Practice – PART 1 

    • This HEPI blog was kindly authored by Dr Tom Kennie, Director of Ranmore.
    • Over the weekend, HEPI director Nick Hillman blogged about the forthcoming party conferences and the start of the new academic year. Read more here.

    Introduction 

Over the last few months, a number of well-informed commentators have focused on understanding the past, present and, to some extent, future context associated with the appointment of Vice Chancellors in the UK. See Tessa Harrison and Josh Freeman of Gatensby Sanderson, Jamie Cumming-Wesley of WittKieffer, and Paul Greatrix.

    In this and a subsequent blog post, I want to complement these works with some practice-informed reflections from my work with many senior higher education leaders. I also aim to open a debate about optimising the selection and support for new Vice Chancellors by challenging some current practices. 

    Reflections to consider when recruiting Vice Chancellors 

    Adopt a different team-based approach 

    Clearly, all appointment processes are team-based – undertaken by a selection committee. For this type of appointment, however, we need a different approach which takes collective responsibility as a ‘Selection and Transition Team’. What’s the difference? In this second approach, the team take a wider remit with responsibility for the full life cycle of the process from search to selection to handover and transition into role. The team also oversee any interim arrangements if a gap in time exists between the existing leader leaving and the successor arriving. This is often overlooked.  

    The Six Keys to a Successful Presidential Transition is an interesting overview of this approach in Canada. 

    Pre-search diagnosis  

Pre-search diagnosis (whether involving a search and selection firm or not) is often underestimated in its importance, or is under-resourced. Before you start to search for a candidate to lead a university, you need to ensure those involved are all ‘on the same page’. Sometimes they are, but in other cases they fail to recognise that they are on the same, but wrong, page. Classically, the mistake is to look for someone to lead the organisation of today, while failing to consider the place it seeks to be in 10 years. Before appointing a search firm, part of the solution is to ensure you have a shared understanding of the type of university you are seeking someone to lead.

Role balance and capabilities

A further diagnostic issue, linked to the former point, is to be very clear about the balance of capabilities required in your selected candidate. One way of framing this is to assess a candidate’s balance across a number of dimensions, including:

• The Chief Academic Officer (CAO) capabilities: more operational and internally focussed;
• The Chief Executive Officer (CEO) capabilities: more strategic and internally focussed;
• The Chief Civic Officer (CCO) capabilities: more strategic and externally focussed; and
• The Chief Stakeholder Relationship Officer (CSRO) capabilities: more operational and externally focussed.

All four matter. One astute Vice Chancellor suggested a fifth to me: the Chief Storytelling Officer (CSO).

    Search firm or not?   

    The decision as to whether to use a search firm is rarely considered today – it is assumed you will use one. It is, however, worth pausing to reflect on this issue, if only to be very clear about what you are seeking from a search firm. What criteria should you use to select one? Are you going with one who you already use, or have used, or are you open to new players (both to you and to the higher education market)? The latter might be relevant if you are seeking to extend your search to candidates who have a career trajectory beyond higher education.  

    ‘Listing’ – how and by whom?   

Searching should lead to many potential candidates. Selecting whom to consider is typically undertaken through a long-listing process, and from this a short-list is created. Make sure you understand how this will be undertaken and who will be doing it. When was the last time you asked to review the larger list from which the long list was taken?

    Psychometrics – why, which and how? 

A related matter involves the use of any psychometric instruments proposed to form part of the selection process. They are often included – yet the rationale for this is often unclear, as is the question of how the data will be used. Equally importantly, if the judgment is that they should be included, who should undertake the process? Whichever route you take, you would be wise to read Andrew Munro’s recent book on the topic, Personality Testing In Employee Selection: Challenges, Controversies and Future Directions.

    Balance questions with scenarios and dilemmas 

Given the complexity of the role of the Vice Chancellor, it is clearly important to assess candidates across a wide range of criteria. Whilst a question-and-answer process can elicit some evidence, we should all be aware of the limitations of such a process. Complementing the former with a well-considered, scenario-based process involving a series of dilemmas, which candidates are invited to consider, is less common than it should be.

    Rehearse final decision scenarios  

If you are fortunate as a selection panel, after having considered many different sources of evidence, you will reach a collective, unanimous decision about the candidate you wish to offer the position. Job almost done. More likely, however, you will have more than one preferred candidate – each providing evidence of being appointable, albeit with gaps in some areas. Occasionally, you may also reach an impasse where strong cases are made to appoint two equally appointable candidates. Prepare for these situations by considering them in advance; in some cases, the first time such situations are considered is during the final stage of the selection exercise.

    In part 2 I’ll focus more on support and how to ensure the leadership transition is given as much attention as candidate selection. 

    Source link

  • Future-Proof Students’ (and Our) Careers by Building Uniquely Human Capacities – Faculty Focus

    Future-Proof Students’ (and Our) Careers by Building Uniquely Human Capacities – Faculty Focus

    Source link