Amid the changes and uncertainty higher ed institutions are navigating, chief HR officers (CHROs) are providing critical leadership to their institutions and workforce. To support those efforts, our new CHRO Conversations offer monthly opportunities for CHROs to hear about strategies being implemented on other campuses, to discuss solutions to emerging challenges, and to connect with one another in a space where conversation is encouraged.
Hosted by CUPA-HR, each 60-minute virtual session features knowledgeable and experienced higher ed HR leaders sharing real-world insights on a timely topic — from workforce planning and HR systems to compensation and strategic leadership. Each session includes breakout discussions to create a high-impact experience tailored to the demands of today’s HR leaders navigating ambiguity and change.
Important: These events are open only to higher ed chief HR officers at CUPA-HR member institutions. Seats are limited to support interaction among participants, so come prepared to engage, reflect and share ideas. These events will not be recorded.
The Growing Movement to Reform Research Assessment and Rankings
By Dean Hoke, September 22, 2025: For the past fifteen years, I have been closely observing what can only be described as a worldwide fascination—if not obsession—with university rankings, whether produced by Times Higher Education, QS, or U.S. News & World Report. In countless conversations with university officials, a recurring theme emerges: while most acknowledge that rankings are often overused by students, parents, and even funders when making critical decisions, few deny their influence. Nearly everyone agrees that rankings are a “necessary evil”—flawed, yet unavoidable—and many institutions still direct significant marketing resources toward leveraging rankings as part of their recruitment strategies.
It is against this backdrop of reliance and ambivalence that recent developments, such as Sorbonne University’s decision to withdraw from THE rankings, deserve closer attention.
In a move that signals a potential paradigm shift in how universities position themselves globally, Sorbonne University recently announced it will withdraw from the Times Higher Education (THE) World University Rankings starting in 2026. This decision isn’t an isolated act of defiance—Utrecht University had already left THE in 2023, and the Coalition for Advancing Research Assessment (CoARA), founded in 2022, has grown to 767 members by September 2025. Together, these milestones reflect a growing international movement that questions the very foundations of how we evaluate academic excellence.
The Sorbonne Statement: Quality Over Competition
Sorbonne’s withdrawal from THE rankings isn’t merely about rejecting a single ranking system. It appears to be a philosophical statement about what universities should stand for in the 21st century. The institution has made it clear that it refuses to be defined by its position in what it sees as commercial ranking matrices that reduce complex academic institutions to simple numerical scores.
Understanding CoARA: The Quiet Revolution
The Coalition for Advancing Research Assessment represents one of the most significant challenges to traditional academic evaluation methods in decades. Established in 2022, CoARA has grown rapidly to include 767 member organizations as of September 2025. This isn’t just a European phenomenon—though European institutions have been early and enthusiastic adopters.
The Four Pillars of Reform
CoARA’s approach centers on four key commitments that directly challenge the status quo:
1. Abandoning Inappropriate Metrics
The agreement explicitly calls for abandoning “inappropriate uses of journal- and publication-based metrics, in particular inappropriate uses of Journal Impact Factor (JIF) and h-index.” This represents a direct assault on the quantitative measures that have dominated academic assessment for decades.
2. Avoiding Institutional Rankings
Perhaps most relevant to the Sorbonne’s decision, CoARA commits signatories to “avoid the use of rankings of research organisations in research assessment.” This doesn’t explicitly require withdrawal from ranking systems, but it does commit institutions to not using these rankings in their own evaluation processes.
3. Emphasizing Qualitative Assessment
The coalition promotes qualitative assessment methods, including peer review and expert judgment, over purely quantitative metrics. This represents a return to more traditional forms of academic evaluation, albeit updated for modern needs.
4. Responsible Use of Indicators
Rather than eliminating all quantitative measures, CoARA advocates for the responsible use of indicators that truly reflect research quality and impact, rather than simply output volume or citation counts.
European Leadership
Top 10 Countries by CoARA Membership:
The geographic distribution of CoARA members tells a compelling story about where resistance to traditional ranking systems is concentrated. As the chart shows, European countries dominate participation, led by Spain and Italy, with strong engagement also from Poland, France, and several Nordic countries. This European dominance isn’t accidental—the region’s research ecosystem has long been concerned about the Anglo-American dominance of global university rankings and the way these systems can distort institutional priorities.
Prestigious European universities like ETH Zurich, the University of Zurich, Politecnico di Milano, and the University of Manchester are among the members, lending credibility to the movement. However, the data reveals that the majority of CoARA members (84.4%) are not ranked in major global systems like QS, which adds weight to critics’ arguments about institutional motivations.
CoARA Members Ranked vs Not Ranked in QS:
The Regional Divide: Participation Patterns Across the Globe
What’s particularly striking about the CoARA movement is the relative absence of U.S. institutions. While European universities have flocked to join the coalition, American participation remains limited. This disparity reflects fundamental differences in how higher education systems operate across regions.
American Participation: The clearest data we have on institutional cooperation with ranking systems comes from the United States. Despite some opposition to rankings, 78.1% of the nearly 1,500 ranked institutions returned their statistical information to U.S. News in 2024, showing that the vast majority of American institutions remain committed to these systems. However, there have been some notable American defections. Columbia University is among the latest institutions to withdraw from U.S. News & World Report college rankings, joining a small but growing list of American institutions questioning these systems. Yet these remain exceptions rather than the rule.
European Engagement: While we don’t have equivalent participation rate statistics for European institutions, we can observe their engagement patterns differently. 688 universities appear in the QS Europe ranking for 2024, and 162 institutions from Northern Europe alone appear in the QS World University Rankings: Europe 2025. However, European institutions have simultaneously embraced the CoARA movement in large numbers, suggesting a more complex relationship with ranking systems—continued participation alongside philosophical opposition.
Global Participation Challenges: For other regions, comprehensive participation data is harder to come by. The Arab region has 115 entries across five broad areas of study in QS rankings, but these numbers reflect institutional inclusion rather than active cooperation rates. It’s important to note that some ranking systems use publicly available data regardless of whether institutions actively participate or cooperate with the ranking organizations.
This data limitation itself is significant—the fact that we have detailed participation statistics for American institutions but not for other regions may reflect the more formalized and transparent nature of ranking participation in the U.S. system versus other global regions.
American universities, particularly those in the top tiers, have largely benefited from existing ranking systems. The global prestige and financial advantages that come with high rankings create powerful incentives to maintain the status quo. For many American institutions, rankings aren’t just about prestige—they’re about attracting international students, faculty, and research partnerships that are crucial to their business models.
Beyond Sorbonne: Other Institutional Departures
Sorbonne isn’t alone in taking action. Utrecht University withdrew from THE rankings earlier, citing concerns about the emphasis on scoring and competition. These moves suggest that some institutions are willing to sacrifice prestige benefits to align with their values. Interestingly, the Sorbonne has embraced alternative ranking systems such as the Leiden Open Rankings, which highlight its impact.
The Skeptics’ View: Sour Grapes or Principled Stand?
Not everyone sees moves like Sorbonne’s withdrawal as a principled stand. Critics argue that institutions often raise philosophical objections only after slipping in the rankings. As one university administrator put it: “If the Sorbonne were doing well in the rankings, they wouldn’t want to leave. We all know why self-assessment is preferred. ‘Stop the world, we want to get off’ is petulance, not policy.”
This critique resonates because many CoARA members are not major players in global rankings, which fuels suspicion that reform may be as much about strategic positioning as about values. For skeptics, the call for qualitative peer review and expert judgment risks becoming little more than institutions grading themselves or turning to sympathetic peers.
The Stakes: Prestige vs. Principle
At the heart of this debate is a fundamental tension: Should universities prioritize visibility and prestige in global markets, or focus on measures of excellence that reflect their mission and impact? For institutions like the Sorbonne, stepping away from THE rankings is a bet that long-term reputation will rest more on substance than on league table positions. But in a globalized higher education market, the risk is real—rankings remain influential signals to students, faculty, and research partners. Rankings also exert practical influence in ways that reformers cannot ignore. Governments frequently use global league tables as benchmarks for research funding allocations or as part of national excellence initiatives. International students, particularly those traveling across continents, often rely on rankings to identify credible destinations, and faculty recruitment decisions are shaped by institutional prestige. In short, rankings remain a form of currency in the global higher education market.
This is why the decision to step away from them carries risk. Institutions like the Sorbonne and Utrecht may gain credibility among reform-minded peers, but they could also face disadvantages in attracting international talent or demonstrating competitiveness to funders. Whether the gamble pays off will depend on whether alternative measures like CoARA or ROI rankings achieve sufficient recognition to guide these critical decisions.
Alternative Rankings: The Rise of Outcome-Based Assessment
While CoARA challenges traditional rankings, a parallel trend focuses on outcome-based measures such as return on investment (ROI) and career impact. Georgetown University’s Center on Education and the Workforce, for example, ranks more than 4,000 colleges on the long-term earnings of their graduates. Its findings tell a very different story than research-heavy rankings—Harvey Mudd College, which rarely appears at the top of global research lists, leads ROI tables with graduates projected to earn $4.5 million over 40 years.
Other outcome-oriented systems, such as The Princeton Review’s “Best Value” rankings, combine measures of affordability, financial aid, academic rigor, and post-graduation success to highlight institutions that deliver strong returns for students relative to their costs. Public universities often rise in these rankings, as do specialized colleges that may not feature prominently in global research tables. These approaches offer a pragmatic counterbalance to CoARA’s reform agenda, showing that students and employers increasingly want measures of institutional value beyond research metrics alone.
Institutions like the Albany College of Pharmacy and Health Sciences illustrate this point. Although virtually invisible in global rankings, Albany graduates report median salaries of $124,700 just ten years after graduation, placing the college among the best in the nation on ROI measures. For students and families making education decisions, data like this often carries more weight than a university’s position in QS or THE.
Together with Georgetown’s ROI rankings and the example of Harvey Mudd College, these cases suggest that outcome-based rankings are not marginal alternatives—they are becoming essential tools for understanding institutional value in ways that matter directly to students and employers.
Rankings as Necessary Evil: The Practical Reality
The CoARA movement and actions like Sorbonne’s withdrawal represent more than just dissatisfaction with current ranking systems. They reflect deeper questions about the values and purposes of higher education in the 21st century. Yet rankings are unlikely to disappear: for students, employers, and funders, they remain a convenient, if imperfect, way to compare institutions across borders. Still, if the reform movement gains momentum, we could see:
Diversification of evaluation methods, with different regions and institution types developing assessment approaches that align with their specific values and goals
Reduced emphasis on competition between institutions in favor of collaboration and shared improvement
Greater focus on societal impact rather than purely academic metrics
More transparent and open assessment processes that allow for a better understanding of institutional strengths and contributions
Conclusion: Evolution, Not Revolution
The Coalition for Advancing Research Assessment and decisions like Sorbonne’s withdrawal from THE rankings represent important challenges to how we evaluate universities, but they signal evolution rather than revolution. Instead of the end of rankings, we are witnessing their diversification. ROI-based rankings, outcome-focused measures, and reform initiatives like CoARA now coexist alongside traditional global league tables, each serving different audiences.
Skeptics may dismiss reform as “sour grapes,” yet the concerns CoARA raises about distorted incentives and narrow metrics are legitimate. At the same time, American resistance reflects both philosophical differences and the pragmatic advantages U.S. institutions enjoy under current systems.
The most likely future is a pluralistic landscape: research universities adopting CoARA principles internally while maintaining a presence in global rankings for visibility; career-focused institutions highlighting ROI and student outcomes; and students, faculty, and employers learning to navigate multiple sources of information rather than relying on a single hierarchy.
In an era when universities must demonstrate their value to society, conversations about how we measure excellence are timely and necessary. Whether change comes gradually or accelerates, the one-size-fits-all approach is fading. A more complex mix of measures is emerging—and that may ultimately serve students, institutions, and society better than the systems we are leaving behind. In the end, what many once described to me as a “necessary evil” may persist—but in a more balanced landscape where rankings are just one measure among many, rather than the single obsession that has dominated higher education for so long.
Dean Hoke is Managing Partner of Edu Alliance Group, a higher education consultancy. He formerly served as President/CEO of the American Association of University Administrators (AAUA). Dean has worked with higher education institutions worldwide. With decades of experience in higher education leadership, consulting, and institutional strategy, he brings a wealth of knowledge on colleges’ challenges and opportunities. Dean is the Executive Producer and co-host for the podcast series Small College America.
Things have been bleak in higher education the last couple of years, and no doubt they will remain bleak for a while. But it recently became clear to me how we’ll know that we are turning the corner: it will be the moment when provincial governments start allowing significant rises in domestic tuition.
This became clear to me when I was having a discussion with a senior provincial official (in a province I shall not name) about tuition. I was arguing that with provincial budgets flat and declining international enrolment, domestic tuition needed to increase – and that there was plenty of room to do so given the affordability trends of the last couple of decades.
What affordability trends, you ask? I’m glad you asked. Affordability is a ratio where the cost of a good or service is the numerator and some measure of ability to pay is the denominator. So, let’s look at what it takes to pay average tuition and fees. Figure 1 shows average tuition as a percentage of the median income of couple families and lone-parent families aged 45-54. As you can see, for the average couple family, average tuition (which – recall last Wednesday’s blog – is an overestimate for most students) has never been more affordable in the twenty-first century. For lone-parent families, current levels of tuition are at a twenty-year low.
Figure 1: Average Undergraduate Tuition and Fees as a Percentage of Median Family Income, Couple Family and Lone-Parent Families aged 45-54, Canada, 2000-2024
Ah, you say, but that’s tuition as a function of parental ability-to-pay – what about students? Well, it’s basically the same story – calculated as a percentage of the average student wage, tuition has not been this cheap since the turn of the century, and in Ontario, it has dropped by 27% since 2017. And yes, the national story is to a large degree a function of what’s been going on in Ontario, but over the past decade or so, this ratio has been declining in all provinces except Manitoba, Saskatchewan and Alberta.
Figure 2: Number of Hours Worked at Median Hourly Income for Canadians Aged 15-24 Required to pay Average Undergraduate Tuition and Fees, Canada and Ontario, 1997-2024
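To make the two measures concrete, the arithmetic behind Figures 1 and 2 is simply a pair of ratios (the numbers below are illustrative placeholders of my own, not the actual Statistics Canada series):

\[
\text{affordability} = \frac{\text{average tuition and fees}}{\text{median family income}}, \qquad
\text{hours required} = \frac{\text{average tuition and fees}}{\text{median hourly wage, ages 15-24}}
\]

For example, tuition of $7,000 against a family income of $110,000 gives a ratio of about 6.4 per cent, while the same tuition at an $18/hour youth wage works out to roughly 390 hours of work.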
And that’s before we even touch the issue of student aid, which as you all know is way up this century even after we take student population growth into account. In real dollars, we’ve gone from a $10B/year student aid system to a $20B/year system with the vast majority of growth coming on the non-repayable side, rather than from loans.
Figure 3: Total Student Financial Assistance by Type, Selected Years, 1993-94 to 2023-24, in Millions, in $2023
In fact, student aid expenditures are so high nowadays that across both universities and colleges we spend about $3 billion more in student aid than we take in from tuition fees. That’s NEGATIVE NET TUITION, PEOPLE.
Figure 4: Aggregate Non-Repayable Aid vs Aggregate Domestic Tuition fees, 2007-08 to 2023-24, in Billions, in $2023
So, yeah, affordability trends. They are much more favorable to students than most people think.
Anyway, the provincial official seemed a bit nonplussed by my reply: my sense is that they had never been briefed on the degree to which tuition increases have been thrown into reverse these past few years, and they certainly didn’t know about the huge increase in non-repayable aid over the past few decades. They didn’t push back on any of this evidence, BUT, they insisted, tuition fees weren’t going up because doing so is hard and it’s unpopular.
To which I responded: well, sure. But was raising tuition any easier or less unpopular in 1989 when the Quebec Liberal government more than doubled tuition? Than in the mid-90s when both the NDP and Conservative governments allowed tuition to rise? Than in 2001 when the BC Liberals allowed tuition to increase by 50%? This has been done before. There’s absolutely no reason it can’t be done again. The only thing it will take is the courage to put the requirements of institutions that actually build economies and societies ahead of the cheap, short-term sugar highs of chasing things like “affordability”.
Now, to be fair, I don’t for the moment see any provincial governments prepared to do this. If there is one thing that seems to unite provincial governments these days, it is an inability to make hard decisions. But this particular political moment won’t last forever. It might take a serious, long-term recession to knock it into various heads that no matter how much money we sink into them, natural resources and construction alone won’t run this economy. Eventually, we’re going to have to re-build the great college and university system we’re in the middle of trashing.
And we’ll know that moment has come when provincial governments agree that domestic tuition should rise again.
In a time when higher education grapples with systemic challenges—rising tuition, debt burdens, underfunding, and institutional inertia—the Next System Teach-Ins emerge as a powerful catalyst for critical dialogue, community engagement, and transformative thinking.
A Legacy of Teach-Ins: From Vietnam to System Change
Teach-ins have long functioned as dynamic forums that transcend mere lecturing, incorporating participatory dialogue and strategic action. The concept originated in March 1965 at the University of Michigan in direct protest of the Vietnam War; faculty and students stayed up all night, creating an intellectual and activist space that sparked over 100 similar events in that year alone.
This model evolved through the decades—fueling the environmental, civil rights, and anti-apartheid movements of the 1970s and 1980s, followed by the Democracy Teach-Ins of the 1990s which challenged corporate influence in universities and energized anti-sweatshop activism. Later waves during Occupy Wall Street and Black Lives Matter sustained teach-ins as a tool for inclusive dialogue and resistance.
The Next System Teach-Ins: Vision, Scope, and Impact
Vision and Purpose
Launched in Spring 2016, the Next System Teach-Ins aimed to broaden public awareness of systemic alternatives to capitalism—ranging from worker cooperatives and community land trusts to decentralized energy systems and democratic public banking.
These teach-ins were designed not just as academic discussion forums but as launching pads for community-led action, connecting participants with toolkits, facilitation guides, ready-made curricula, and resources to design their own events.
Highlights of the Inaugural Wave
In early 2016, notable teach-ins took place across the U.S.—from Madison and New York City to Seattle and beyond. Participants explored pressing questions such as, “What comes after capitalism?” and “How can communities co-design alternatives that are just, sustainable, and democratic?”
These gatherings showcased a blend of plenaries, interactive workshops, radio segments, and “wall-to-wall” organizing strategies, mobilizing participants into collective engagement rather than mere attendance.
Resources and Capacity Building
Organizers were provided with a wealth of support materials including modular curriculum, templates for publicity and RFPs, event agendas, speaker lists, and online infrastructure to manage RSVPs and share media.
The goal was dual: ignite a nationwide conversation on alternative systemic models, and encourage each teach-in host to aim for a specific local outcome—whether that be a campus campaign, curriculum integration, or forming ongoing community groups.
2025: Renewed Momentum
The Next System initiative has evolved. According to a May 2025 update from George Mason University’s Next System Studies, a new wave of Next System Teach-Ins is scheduled for November 1–16, 2025.
This iteration amplifies the original mission: confronting interconnected social, ecological, political, and economic crises by gathering diverse communities—on campuses, in union halls, or public spaces—to rethink, redesign, and rebuild toward a more equitable and sustainable future.
Why This Matters for Higher Education (An HEI Perspective)
Teach-ins revitalize civic engagement on campus by reasserting higher education’s role as an engine of critical thought and imagination.
They integrate scholarship and practice, uniting theory with actionable strategies—from economic democracy to ecological regeneration—and enrich academic purpose with real-world relevance.
They also mobilize institutional infrastructure, offering student-led exploration of systemic change without requiring prohibitive resources.
By linking the global and the local, teach-ins equip universities to address both planetary crises and campus-specific challenges.
Most importantly, they trigger systemic dialogue, pushing past complacency and fostering a new generation of system-thinking leaders.
Looking Ahead: Institutional Opportunities
Host a Teach-In – Whether a focused film screening, interdisciplinary workshop, or full-scale weekend event, universities can leverage Next System resources to design context-sensitive, action-oriented programs.
Embed in Curriculum – The modular material—especially case studies on democratic economics, energy justice, or communal models—can integrate into courses in sociology, environmental studies, governance, and beyond.
Forge Community Partnerships – By extending beyond campus (to community centers, labor unions, public libraries), teach-ins expand access and deepen impact.
Contribute to a National Movement – University participation in the November 2025 wave positions institutions as active contributors to a growing ecosystem of systemic transformation.
A Bold Experiment
The Next System Teach-Ins represent a bold experiment in higher education’s engagement with systemic change. Combining rich traditions of activism with pragmatic tools for contemporary challenges, these initiatives offer HEIs a blueprint for meaningful civic education, collaborative inquiry, and institutional transformation.
As the 2025 wave approaches, universities have a timely opportunity to be centers of both reflection and action in building the next system we all need.
But identifying how and when to deliver that content has been a challenge, particularly given the varying perspectives different disciplines have on generative AI and when its use should be allowed. A June report from Tyton Partners found that 42 percent of students use generative AI tools at least weekly, and two-thirds of students use a single generative AI tool like ChatGPT. A survey by Inside Higher Ed and Generation Lab found that 85 percent of students had used generative AI for coursework in the past year, most often for brainstorming or asking questions.
The University of Mary Washington developed an asynchronous one-credit course to give all students enrolled this fall a baseline foundation of AI knowledge. The optional class, which was offered over the summer at no cost to students, introduced them to AI ethics, tools, copyright concerns and potential career impacts.
The goal is to help students use the tools thoughtfully and intelligently, said Anand Rao, director of Mary Washington’s center for AI and the liberal arts. Initial results show most students learned something from the course, and they want more teaching on how AI applies to their majors and future careers.
How it works: The course, IDIS 300: Introduction to AI, was offered to any new or returning UMW student to be completed any time between June and August. Students who opted in were added to a digital classroom with eight modules, each containing a short video, assigned readings, a discussion board and a quiz assignment. The class was for credit, graded as pass-fail, but didn’t fulfill any general education requirements.
Course content ranged from how to use AI tools and prompt generative AI to academic integrity, professional development and how to critically evaluate AI responses.
“I thought those were all really important as a starting point, and that still just scratches the surface,” Rao said.
The course is not designed to make everyone an AI user, Rao said, “but I do want them to be able to speak thoughtfully and intelligently about the use of tools, the application of tools and when and how they make decisions in which they’ll be able to use those tools.”
At the end of the course, students submitted a short paper analyzing an AI tool used in their field or discipline—its output, use cases and ways the tool could be improved.
Rao developed most of the content, but he collaborated with campus stakeholders who could provide additional insight, such as the Honor Council, to lay out how AI use is articulated in the honor code.
The impact: In total, the first class enrolled 249 students from a variety of majors and disciplines, or about 6 percent of the university’s total undergrad population. A significant number of the course enrollees were incoming freshmen. Eighty-eight percent of students passed the course, and most had positive feedback on the class content and structure.
In postcourse surveys, 68 percent of participants indicated IDIS 300 should be a mandatory course or highly recommended for all students.
“If you know nothing about AI, then this course is a great place to start,” said one junior, noting that the content builds from the basics to direct career applications.
What’s next: Rao is exploring ways to scale the course in the future, including by developing intermediate or advanced classes or creating discipline-specific offerings. He’s also hoping to recruit additional instructors, because the course had some challenges given its large size, such as conducting meaningful exchanges on the discussion board.
The center will continue to host educational and discussion-based events throughout the year to continue critical conversations regarding generative AI. The first debate, centered on AI and the environment, aims to evaluate whether AI’s impact will be a net positive or negative over the next decade, Rao said.
The university is also considering ways to engage the wider campus community and those outside the institution with basic AI knowledge. IDIS 300 content will be made available to nonstudents this year as a Canvas page. Some teachers in the local school district said they’d like to teach the class as a dual-enrollment course in the future.
One of the things that makes interviews stressful is their unpredictability, which is unfortunately also what makes them so hard to prepare for. In particular, it’s impossible to predict exactly what questions you will be asked. So, how do you get ready?
Scripting out answers for every possible question is a popular strategy but a losing battle. There are too many (maybe infinite?) possible questions and simply not enough time. In the end, you’ll spend all your time writing and you still won’t have answers to most of the questions you might face. And while it might make you feel briefly more confident, that confidence is unlikely to survive the stress and distress of the actual interview. You’ll be rigid rather than flexible, robotic rather than responsive.
This article outlines an interview-preparation strategy that is both easier and more effective than frantic answer scripting, one that will leave you able to answer just about any interview question smoothly.
Step 1: Themes
While you can’t know what questions you will get, you can pretty easily predict many of the topics your interviewers will be curious about. You can be pretty sure that an interviewer will be interested in talking about collaboration, for example, even if you can’t say for sure whether they’ll ask a standard question like “Tell us about a time when you worked with a team to achieve a goal” or something weirder like “What role do you usually play on a team?”
Your first step is to figure out the themes that their questions are most likely to touch on. Luckily, I can offer you a starter pack. Here are five topics that are likely to show up in an interview for just about any job, so it pays to prepare for them no matter what:
Communication
Collaboration (including conflict!)
Time and project management
Problem-solving and creativity
Failures and setbacks
But you also need to identify themes that are specific to the job or field you are interviewing for. For a research and development scientist position, for example, an interviewer might also be interested in innovation and scientific thinking. For a project or product manager position, they’ll probably want to know about stakeholder management. And so on.
To identify these specific themes, check the job ad. They may have already identified themes for you by categorizing the responsibilities or qualifications, or you can just look for patterns. What topics/ideas/words come up most often in the ad that aren’t already represented in the starter pack? What kinds of skills or experience are they expecting? If you get stuck, try throwing the ad into a word cloud generator and see what it spits out.
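If you want a quick version of that frequency check without a word cloud site, here is a minimal sketch in Python; the file name and the stopword list are placeholders of my own choosing:

```python
# Count the most frequent words in a job ad to surface likely interview themes.
import re
from collections import Counter

# A tiny illustrative stopword list; extend it as needed.
STOPWORDS = {"the", "and", "to", "of", "a", "in", "with", "for", "or",
             "on", "is", "an", "as", "will", "be", "you", "our", "are"}

with open("job_ad.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

# Keep substantive words only, then print the top candidates for themes.
counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
for word, n in counts.most_common(15):
    print(f"{word:20} {n}")
```

The most frequent remaining words are usually good candidates for job-specific themes.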
Ideally, try to end this step with at least three new themes, in addition to the starter pack.
Step 2: Stories
The strongest interview answers are anchored by a specific story from your experience, which provides a tangible demonstration about how you think and what you can do. But it’s incredibly difficult to come up with a good, relevant example in the heat of an interview, let alone to tell it effectively in a short amount of time. For that, you need some preparation and some practice.
So for each of your themes, identify two to three relevant stories. Stories can be big (a whole project from beginning to end), or they can be small (a single interaction with a student). They can be hugely consequential (a decision you made that changed the course of your career), or they can be minor but meaningful (a small disagreement you handled well). What is most important is that the stories demonstrate your skills, experiences and attitudes clearly and compellingly.
The point is to have a lot of material to work with, so aim for at least 10 stories total, and preferably more. The same story can apply to multiple themes, but try not to let double-dipping limit the number of stories you end up with.
Then, for each of your stories, write an outline that gives just enough context to understand the situation, describes your actions and thinking, and says what happened at the end. Use the STAR method if it’s useful for keeping your stories tight and focused. Shaping your stories and deciding what to say (and not say) will help your audience see your skills in action with minimal distractions. This is one of the most important parts of your prep, so take your time with it.
Step 3: Approaches
As important as stories are in interviewing, you usually can’t just respond to a question with a story without any framing or explanation. So you’ll want to develop language to describe some of your usual strategies, orientations or approaches to situations that fall into each of the themes. That language will help you easily link each question to one of your stories.
So for each theme, do a little brainstorming to identify your core messaging: “What do I usually do when faced with a situation related to [THEME]?” Then write a few bullet points. (You can also reverse engineer this from the stories: Read the stories linked to a particular theme, then look for patterns in your thinking or behavior.)
These bullet points give you what you need to form connective tissue between the specific question they ask and the story you want to tell. So if they ask, “Tell me about a time when you worked with a team to achieve a goal,” you can respond with a story and close out by describing how that illustrates a particular approach. Or if they ask, “What role do you usually play on a team?” you can start by describing how you think about collaboration and your role in it and then tell a story that illustrates that approach.
Though we are focusing on thematic questions here, make sure to also prepare bullet points for some of the most common general interview questions, like “Why do you want this job?” and “Tell us about yourself.”
Step 4: Bring It All Together
You really, really, really need to practice out loud before your interview. Over the years, I’ve found that many of the graduate students and postdocs I work with spend a lot of time thinking about how they might answer questions and not nearly enough time actually trying to answer them. And so they miss the opportunity to develop the kind of fluency and flexibility that helps one navigate the unpredictable environment of an interview.
Here’s how to use the prep you did in Steps 1-3 to practice:
First, practice telling each of your 10-plus stories out loud, at least three times each. The goal here is to develop fluency in your storytelling, so you can keep things focused and flowing without needing to think about it.
Second, for each of the bullet points you created in Step 3, practice explaining it (out loud!) a few times, ideally in a couple of different ways.
Third, practice bringing it all together by answering some actual interview questions. Find a long list of interview questions (like this one), then pick questions at random to answer. The randomness is important, because the goal is to practice making smooth and effective connections between questions, stories and approaches. You need to figure out what to do when you run into a question that is challenging, unexpected or just confusing.
And once you’ve done that, do it all again.
In the end, you’ve created a set of building blocks that you can arrange and rearrange as needed in the moment. And it’s a set you can keep adding to with more stories and more themes, keep practicing with new questions, and keep adapting for your next interview.
Derek Attig is assistant dean for career and professional development in the Graduate College of the University of Illinois at Urbana-Champaign. Derek is a member of the Graduate Career Consortium, an organization providing an international voice for graduate-level career and professional development leaders.
Earlier this month, College Board announced its decision to kill Landscape, a race-neutral tool that allowed admissions readers to better understand a student’s context for opportunity. After an awkward 2019 rollout as the “Adversity Score,” Landscape gradually gained traction in many selective admissions offices. Among other items, the dashboard provided information on the applicant’s high school, including the economic makeup of their high school class, participation trends for Advanced Placement courses and the school’s percentile SAT scores, as well as information about the local community.
Landscape was one of the more extensively studied interventions in the world of college admissions, with research showing that providing more information about an applicant’s circumstances can boost the likelihood of a low-income student being admitted. Admissions officers lack high-quality, detailed information on the high school environment for an estimated 25 percent of applicants, a trend that disproportionately disadvantages low-income students. Landscape helped fill that critical gap.
While not every admissions office used it, Landscape was fairly popular within pockets of the admissions community, as it provided a more standardized, consistent way for admissions readers to understand an applicant’s environment. So why did College Board decide to ax it? In its statement on the decision, College Board noted that “federal and state policy continues to evolve around how institutions use demographic and geographic information in admissions.” The statement seems to be referring to the Trump administration’s nonbinding guidance that institutions should not use geographic targeting as a proxy for race in admissions.
If College Board was worried that somehow people were using the tool as a proxy for race (and they weren’t), well, it wasn’t a very good one. In the most comprehensive study of Landscape being used on the ground, researchers found that it didn’t do anything to increase racial/ethnic diversity in admissions. Things are different when it comes to economic diversity. Use of Landscape is linked with a boost in the likelihood of admission for low-income students. As such, it was a helpful tool given the continued underrepresentation of low-income students at selective institutions.
Still, no study to date found that Landscape had any effect on racial/ethnic diversity. The findings are unsurprising. After all, Landscape was, to quote College Board, “intentionally developed without the use or consideration of data on race or ethnicity.” If you look at the laundry list of items included in Landscape, absent are items like the racial/ethnic demographics of the high school, neighborhood or community.
While race and class are correlated, they certainly aren’t interchangeable. Admissions officers weren’t using Landscape as a proxy for race; they were using it to compare a student’s SAT score or AP course load to those of their high school classmates. Ivy League institutions that have gone back to requiring SAT/ACT scores have stressed the importance of evaluating test scores in the student’s high school context. Eliminating Landscape makes it harder to do so.
An important consideration: Even if using Landscape were linked with increased racial/ethnic diversity, its usage would not violate the law. The Supreme Court recently declined to hear the case Coalition for TJ v. Fairfax County School Board, likely issuing a tacit blessing on race-neutral methods to advance diversity in admissions. The decision leaves the Fourth Circuit opinion, which affirmed the race-neutral admissions policy used to boost diversity at Thomas Jefferson High School for Science and Technology, intact.
The court also recognized the validity of race-neutral methods to pursue diversity in the 1989 case City of Richmond v. J.A. Croson Co. In a concurring opinion filed in Students for Fair Admissions (SFFA) v. Harvard, Justice Brett Kavanaugh quoted Justice Antonin Scalia’s words from Croson: “And governments and universities still ‘can, of course, act to undo the effects of past discrimination in many permissible ways that do not involve classification by race.’”
College Board’s decision to ditch Landscape sends an incredibly problematic message: that tools to pursue diversity, even economic diversity, aren’t worth defending due to the fear of litigation. If a giant like College Board won’t stand behind its own perfectly legal effort to support diversity, what kind of message does that send? Regardless, colleges and universities need to remember their commitments to diversity, both racial and economic. Yes, post-SFFA, race-conscious admissions has been considerably restricted. Still, despite the bluster of the Trump administration, most tools commonly used to expand access remain legal.
The decision to kill Landscape is incredibly disappointing, both pragmatically and symbolically. It’s a loss for efforts to broaden economic diversity at elite institutions, yet another casualty in the Trump administration’s assault on diversity. Even if the College Board has decided to abandon Landscape, institutions must not forget their obligations to make higher education more accessible to low-income students of all races and ethnicities.
In this and a subsequent blog post, I want to complement these works with some practice-informed reflections from my work with many senior higher education leaders. I also aim to open a debate about optimising the selection and support for new Vice Chancellors by challenging some current practices.
Reflections to consider when recruiting Vice Chancellors
Adopt a different team-based approach
Clearly, all appointment processes are team-based – undertaken by a selection committee. For this type of appointment, however, we need a different approach which takes collective responsibility as a ‘Selection and Transition Team’. What’s the difference? In this second approach, the team take a wider remit with responsibility for the full life cycle of the process from search to selection to handover and transition into role. The team also oversee any interim arrangements if a gap in time exists between the existing leader leaving and the successor arriving. This is often overlooked.
Pre-search diagnosis (whether involving a search and selection firm or not) is often underestimated in its importance or is under-resourced. Before you start to search for a candidate to lead a university, you need to ensure those involved are all ‘on the same page’. Sometimes they are, but in other cases they fail to recognise that they are on the same, but wrong, page. Classically, the error is to seek someone to lead the organisation of today, failing to consider the place the institution seeks to be in 10 years. Before appointing a search firm, part of the solution is to ensure you have a shared understanding of the type of university you are seeking someone to lead.
Role balance and capabilities
A further diagnostic issue, linked to the former point, is to be very clear about the balance of capabilities required in your selected candidate. One way of framing this is to assess the candidate balance across a number of dimensions, including:
The Chief Academic Officer (CAO) capabilities: more operational and internally focussed;
The Chief Executive Officer (CEO) capabilities: more strategic and initially internally focussed;
The Chief Civic Officer (CCO) capabilities: more strategic and externally focussed; and
The Chief Stakeholder Relationship Officer (CSRO) capabilities: more operational and externally focussed.
All four matter. One astute Vice Chancellor suggested to me a fifth: Chief Storytelling Officer (CSO).
Search firm or not?
The decision as to whether to use a search firm is rarely considered today – it is assumed you will use one. It is, however, worth pausing to reflect on this issue, if only to be very clear about what you are seeking from a search firm. What criteria should you use to select one? Are you going with one who you already use, or have used, or are you open to new players (both to you and to the higher education market)? The latter might be relevant if you are seeking to extend your search to candidates who have a career trajectory beyond higher education.
‘Listing’ – how and by whom?
Searching should lead to many potential candidates. Selecting who to consider is typically undertaken through a long-listing process and from this a short-list is created. Make sure you understand how this will be undertaken and who will be doing it. When was the last time you asked to review the larger list from which the long list was taken?
Psychometrics – why, which and how?
A related matter involves the use of any psychometric instruments proposed to form part of the selection process. They are often included, yet the rationale for this is often unclear, as is the question of how the data will be used. Equally importantly, if the judgment is that they should be included, who should undertake the process? Whichever route you take, you would be wise to read Andrew Munro’s recent book on the topic, Personality Testing In Employee Selection: Challenges, Controversies and Future Directions.
Balance questions with scenarios and dilemmas
Given the complexity of the role of the Vice Chancellor, it is clearly important to assess candidates across a wide range of criteria. Whilst a question-and-answer process can elicit some evidence, we should all be aware of the limitations of such a process. Complementing it with a well-considered scenario-based process involving a series of dilemmas, which candidates are invited to consider, is less common than it should be.
Rehearse final decision scenarios
If you are fortunate as a selection panel, after having considered many different sources of evidence, you will reach a collective, unanimous decision about the candidate you wish to offer the position. Job almost done. More likely, however, you will have more than one preferred candidate, each with evidence of being appointable albeit with gaps in some areas. Occasionally, you may also have reached an impasse where strong cases are made to appoint two equally appointable candidates. Prepare for these situations by considering them in advance. In some cases, the first time they are considered is during the final stage of the selection exercise.
In part 2 I’ll focus more on support and how to ensure the leadership transition is given as much attention as candidate selection.
The rapid adoption and development of AI has rocked higher education and thrown into doubt many students’ career plans and as many professors’ lesson plans. The best and only response is for students to develop capabilities that can never be authentically replicated by AI because they are uniquely human. Only humans have flesh and blood bodies. And these bodies are implicated in a wide range of Uniquely Human Capacities (UHCs), such as intuition, ethics, compassion, and storytelling. Students and educators should reallocate time and resources from AI-replaceable technical skills like coding and calculating to developing UHCs and AI skills.
Adoption of AI by employers is increasing while expectations for AI-savvy job candidates are rising. College students are getting nervous. 51% are second-guessing their career choice and 39% worry that their job could be replaced by AI, according to Cengage Group’s 2024 Graduate Employability Report. Recently, I heard a student at an on-campus AI literacy event ask an OpenAI representative if she should drop her efforts to be a web designer. (The representative’s response: spend less time learning the nuts and bolts of coding, and more time learning how to interpret and translate client goals into design plans.)
At the same time, AI capabilities are improving quickly. Recent frontier models have added “deep research” (web search and retrieval) and “reasoning” (multi-step thinking) capabilities, producing more comprehensive, accurate and thoughtful results by performing broader searches and developing responses step by step. Leading models are beginning to offer agentic features, which can do work for us, such as coding, independently. American AI companies are investing hundreds of billions in a race to develop Artificial General Intelligence (AGI), a loosely defined state of the technology where AI can perform at least as well as humans in virtually any economically valuable cognitive task: it can act autonomously, learn, plan, adapt, and interact with the world in a general, flexible way, much as humans do. Some experts suggest we may reach this point by 2030, although others have a longer timeline.
Hard skills that may be among the first to be replaced are those that AI can do better, cheaper, and faster. As a general-purpose tool, AI can already perform basic coding, data analysis, administrative, routine bookkeeping and accounting, and illustration tasks that previously required specialized tools and experience. I have had my own mind-blowing “vibe-coding” experience, creating custom apps with limited understanding of coding syntax. AIs are capable of quantitative, statistical, and textual analysis that might have required Excel or R in the past. According to Deloitte, AI initiatives are touching virtually every aspect of a company’s business, affecting IT, operations, and marketing the most. AI can create presentations driven by natural language, making manual PowerPoint drafting skills less essential.
Humans’ Future-Proof Strategy
How should students, faculty and staff respond to the breathtaking pace of change and profound uncertainties about the future of labor markets? The OpenAI representative was right: reallocate time and resources from easily automatable skills to those things that only humans with bodies can do. Let us spend less time teaching and learning skills that are likely to be automated soon.
| Technical Skills OUT | Uniquely Human Capacities IN |
| --- | --- |
| Basic coding | Mindfulness, empathy, and compassion |
| Data entry and bookkeeping | Ethical judgment, meaning making, and critical thinking |
| Mastery of single-purpose software (e.g., PowerPoint, Excel, accounting apps) | Authentic and ethical use of generative and other kinds of AI to augment UHCs |
Instead, students (and everyone) should focus on developing Uniquely Human Capacities (UHCs). These are abilities that only humans can authentically perform because they require a human body. For example, intuition is our inarticulable and immediate knowledge, what we know somatically, in our gut. It is how we empathize, show compassion, evaluate morality, listen and speak, love, appreciate and create beauty, play, collaborate, tell stories, find inspiration and insight, engage our curiosity, and emote. It is how we engage with the deep questions of life and ask the really important questions.
According to Gholdy Muhammad in Unearthing Joy, a reduced emphasis on skills can improve equity by creating space to focus on students’ individual needs. She argues that standards and pedagogies need to also reflect “identity, intellectualism, criticality, and joy.” These four dimensions help “contextualize skills and give students ways to connect them to the real world and their lives.”
The National Association of Colleges and Employers has created a list of eight career readiness competencies that employers say are necessary for career success. Take a look at the list below and you will see that seven of the eight are UHCs. The eighth, technology, underlines the need for students and their educators to understand and use AI effectively and authentically.
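For reference, the eight NACE career readiness competencies are: career and self-development; communication; critical thinking; equity and inclusion; leadership; professionalism; teamwork; and technology.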
For example, an entry-level finance employee who has developed their UHCs will be able to nimbly respond to changing market conditions, interpret the intentions of managers and clients, and translate these into effective analysis and creative solutions. They will use AI tools to augment their work, adding greater value with less training and oversight.
Widen Humans’ Comparative Advantage
As demonstrated in the example above, our UHCs are humans’ unfair advantage over AI. How do we develop them, ensuring the employability and self-actualization of students and all humans?
The foundation is mindfulness. Mindfulness is about being fully present with ourselves and others, accepting what arises, primarily via bodily sensations, without judgment or preference. It allows us to accurately perceive reality, including our natural intuitive connection with other humans, a connection AI cannot share. Mindfulness can be developed during and beyond meditation, moments of stillness devoted to mindfulness. Mindfulness practice has been shown to improve self-knowledge, help set career goals, and improve creativity.
Mindfulness supports intuitive thinking and metacognition, our ability to think clearly about thinking. Non-conceptual thinking, done with our whole bodies, entails developing our intuition and a growth mindset. The latter means recognizing that we are all works in progress, and that learning is the product of careful risk-taking and learning from errors, supported by other humans.
These practices support deep, honest, authentic engagement with other humans of all types. (This is not available over social media.) For students, this means engaging with each other in class, study groups, clubs, and elsewhere on campus, as well as engaging with faculty in class and office hours. Such engagement can feel unfamiliar and awkward as we emerge from the pandemic. However, these interactions are a critical way to practice and improve our UHCs.
Literature and cinema are ways to engage with and develop empathy for, and understanding of, humans you do not know, who may no longer be alive, or who may never have existed at all. Fiction is perhaps the only way to experience in the first person what a stranger is thinking and feeling.
Indeed, every interaction with the world is an opportunity to practice these Uniquely Human Capacities:
Use your imagination and creativity to solve a math problem.
Format your spreadsheet or presentation or essay so that it is beautiful.
Get in touch with the feelings that arise when faced with a challenging task.
Many students tell me they are in college to better support and care for family. As you do the work, let yourself experience it as an act of love for them.
AI Can Help Us Be Better Humans
AI usage can dull our UHCs or sharpen them. We should use AI to challenge us to improve our work, not to provide shortcuts that make our work average, boring, or worse. Ethan Mollick (2024) describes the familiar roles AIs can profitably play in our lives. Chief among these is as a patient, always available, if sometimes unreliable, tutor. A tutor will give us helpful, critical feedback and hints, but never the answers. A tutor will not do our work for us. A tutor will suggest alternative strategies, and we can instruct it to nudge us to check on our emotions, physical sensations, and the moral dimensions of our work. When we prompt AI for help, we should explicitly give it the role of a tutor or editor (as I did with Claude for this article).
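A minimal illustration, in my own words rather than a formula from Mollick: before asking for help, one might tell the model, "Act as a patient tutor. Give me hints and critical feedback on my draft, suggest alternative strategies, and occasionally ask what I am feeling about the work, but do not write any of it for me." The exact phrasing matters less than explicitly withholding permission to do the work for us.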
How do we assess whether we and our students are developing our UHCs? We can develop personal and work portfolios that tell the stories of the connections, insights, and benefits to society we have made. We can gather honest testimonials from trusted human partners and engage in critical yet self-compassionate introspection and journaling. Deliberate practice with feedback, in real life and in role-playing scenarios, can also be valuable. One thing that will not work as well: traditional grades and quantitative measures. After all, humanity cannot be measured.
In a future where AI or AGI assumes the more rote and mechanical aspects of work, we humans are freed to build our UHCs, to become more fully human. An optimistic scenario!
What Could Go Wrong?
The huge, profit-seeking transnational corporations that control AI may soon feel greater pressure to show investors a return on their enormous investments. This could drive up costs for users, widening the capabilities gap between those with means and the rest. It could also result in balkanized AI, where each model is embedded with political, social, and other biases that appeal to different demographics. We see this beginning with Claude, which prioritizes safety, and Grok, built to emphasize free expression.
In addition, AI could get good enough at faking empathy, morality, intuition, sense-making, and other UHCs to pass as a substitute for them. In a competitive, winner-take-all economy with even less government regulation and a leakier safety net, companies may aggressively reduce hiring at the entry level and of (expensive) high performers. Many of the job functions of the former can be most easily replaced by AI, while mid-level professionals can use AI to perform at a higher level, reducing the need for the latter.
Finally, and this is not an exhaustive list: students, and all of us, may succumb to the temptation of using AI to shortcut our work, slowing or reversing the development of critical thinking, analytical skills, and subject matter expertise. The tech industry has perfected, over twenty years, the science of making our devices virtually impossible to put down, so that we are "hooked."
Keeping Humans First
The best way to reduce the risks posed by AI-driven change is to develop our students’ Uniquely Human Capacities while actively engaging policymakers and administrators to ensure a just transition. This enhances the unique value of flesh-and-blood humans in the workforce and society. Educators across disciplines should identify lower value-added activities vulnerable to automation and reorient curricula toward nurturing UHCs. This will foster not only employability but also personal growth, meaningful connection, and equity.
Even in the most challenging scenarios, we are unlikely to regret investing in our humanity. Beyond being well-employed, what could be more rewarding than becoming more fully actualized, compassionate, and connected beings? By developing our intuitions, morality, and bonds with others and the natural world, we open lifelong pathways to growth, fulfillment, and purpose. In doing so, we build lives and communities resilient to change, rich in meaning, and true to what it means to be human.
This article represents my opinions only, not necessarily those of the Borough of Manhattan Community College or CUNY.
Brett Whysel is a lecturer in finance and decision-making at the Borough of Manhattan Community College, CUNY, where he integrates mindfulness, behavioral science, generative AI, and career readiness into his teaching. He has written for Faculty Focus, Forbes, and The Decision Lab. He is also the co-founder of Decision Fish LLC, where he develops tools to support financial wellness and housing counselors. He regularly presents on mindfulness and metacognition in the classroom and is the author of the Effortless Mindfulness Toolkit, an open resource for educators published on CUNY Academic Works. Prior to teaching, he spent nearly 30 years in investment banking. He holds an M.A. in Philosophy from Columbia University and a B.S. in Managerial Economics and French from Carnegie Mellon University.
The Teaching Excellence Framework has always had multiple aims.
It was partly intended to rebalance institutional focus from research towards teaching and student experience. Jo Johnson, the minister who implemented it, saw it as a means of increasing undergraduate teaching resources in line with inflation.
Dame Shirley Pearce prioritised enhancing quality in her excellent review of TEF implementation. And there have been other purposes of the TEF: a device to support regulatory interventions where quality fell below required thresholds, and as a resource for student choice.
And none of this should obscure its enthusiastic adoption by student recruitment teams as a marketing tool.
As former Chair and Deputy Chair of the TEF, we are perhaps more aware than most of these competing purposes, and more experienced in understanding how regulators, institutions and assessors have navigated the complexity of TEF implementation. The TEF has had its critics – something else we are keenly aware of – but it has had a marked impact.
Its benchmarked indicator sets have driven a data-informed and strategic approach to institutional improvement. Its concern with disparities for underrepresented groups has raised the profile of equity in institutional education strategies. Its whole-institution sweep has made institutions alert to the consequences of poorly targeted education strategies and has prioritised improvement goals.

Now, the publication of the OfS's consultation paper on the future of the TEF is an opportunity to reflect on how the TEF is changing and what it means for the regulatory and quality framework in England.
A shift in purpose
The consultation proposes that the TEF becomes part of what the OfS sees as a more integrated quality system. All registered providers will face TEF assessments, with no exemptions for small providers. Given the number of new providers seeking OfS registration, it is likely that the number to be assessed will be considerably larger than the 227 institutions in the 2023 TEF.
Partly because of the larger number of assessments to be undertaken, the TEF will move to a rolling cycle, with a pool of assessors. Institutions will still be awarded three grades – one for outcomes, one for experience, and one overall – but the overall grade will simply be the lower of the other two. The real impact of this will fall on Bronze-rated providers, who could find themselves subject to a range of measures, potentially including student number controls or fee constraints, until they show improvement.
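To make the mechanics concrete with a hypothetical case: a provider awarded Silver for student experience but Bronze for student outcomes would be rated Bronze overall, because the lower aspect grade carries through.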
The OfS consultation paper marks a significant shift in the purpose of the TEF, from quality enhancement to regulation and from improvement to compliance. The most significant changes are at the lower end of assessed performance. The consultation paper makes sensible changes to aspects of the TEF which always posed challenges for assessors and regulators, tidying up the relationship between the threshold B3 standards and the lowest TEF grades. It correctly separates measures of institutional performance on continuation and completion – over which institutions have more direct influence – from progression to employment – over which institutions have less influence.
Pressure points
But it does this at some heavy costs. By treating the Bronze grade as a measure of performance at, rather than above, threshold quality, it will produce just two grades above the threshold. In shifting the focus towards quantitative indicators and away from institutional discussion of context, it will make TEF life more difficult for further education institutions and institutions in locations with challenging graduate labour markets. The replacement of the student submission with student focus groups may allow more depth on some issues, but comes at the expense of breadth, and the student voice is, disappointingly, weakened.
There are further losses as the regulatory purpose is embedded. The most significant is the move away from educational gain, and this is a real loss: following TEF 2023, almost all institutions were developing their approaches to and evaluation of educational gain, and we have seen many examples where this was shaping fruitful approaches to articulating institutional goals and the way they shape educational provision.
Educational gain is an area in which institutions were increasingly thinking about distinctiveness and how it informs student experience. Seeing it go will weaken the power of many education strategies. The ideas of educational gain and distinctiveness will almost certainly still be required for confident performance at the highest levels of achievement, but it is a real pity that they are now less explicit. Educational gain can drive distinctiveness, and distinctiveness can drive quality.
Two sorts of institutions will face the most significant challenges. The first, obviously, are providers rated Bronze in 2023, or Silver-rated providers whose indicators are on a downward trajectory. Eleven universities were given a Bronze rating overall in the last TEF exercise, and 21 received Bronze for either the student experience or the student outcomes aspect. Of the 21, only three Bronzes were for student outcomes, but under the OfS plans all 21 would be graded Bronze overall, since an institution would be given its lowest aspect grade as its overall grade. Under the proposals, Bronze-graded institutions will need to address concerns rapidly to mitigate impacts on growth plans, funding, prestige, and competitive position.
The second group facing significant challenges will be those in difficult local and regional labour markets. Of the 18 institutions with Bronze in one of the two aspects of TEF 2023, only three were graded Bronze for student outcomes, whereas 15 were for student experience. Arguably, this was to be expected when only two of the six features of student outcomes had associated indicators: continuation/completion and progression.
In other words, where indicators were substantially below benchmark, there were opportunities to show how outcomes were supported and how educational gain was developed. Under the new proposals, the approach to assessing student outcomes is largely, if not exclusively, based on the continuation and completion indicators. This approach is likely to reinforce differences between institutions, especially those with intakes from underrepresented populations.
The stakes
The new TEF will play out in different ways in different parts of the sector. The regulatory focus will increase pressure on some institutions, whilst appearing to relieve it in others. For those institutions operating at 2023 Bronze levels or where 2023 Silver performance is declining, the negative consequences of a poor performance in the new TEF, which may include student number controls, will loom large in institutional strategy. The stakes are now higher for these institutions.
On the other hand, institutions whose graduate employment and earnings outcomes are strong are likely to feel relieved, though a careful reading of the grade specifications for higher performance suggests that there is work to be done on education strategies even at the best-performing 2023 institutions.
In public policy, lifting the floor – by addressing regulatory compliance – and raising the ceiling – by promoting improvement – at the same time is always difficult, but the OfS consultation seems to have landed decisively on the side of compliance rather than improvement.