Blog

  • Online Course Gives College Students a Foundation on GenAI


    As more employers identify uses for generative artificial intelligence in the workplace, colleges are embedding tech skills into the curriculum to best prepare students for their careers.

But identifying how and when to deliver that content has been a challenge, particularly given the varying perspectives different disciplines have on generative AI and when its use should be allowed. A June report from Tyton Partners found that 42 percent of students use generative AI tools at least weekly, and two-thirds of students use a single generative AI tool like ChatGPT. A survey by Inside Higher Ed and Generation Lab found that 85 percent of students had used generative AI for coursework in the past year, most often for brainstorming or asking questions.

The University of Mary Washington developed an asynchronous one-credit course to give all students enrolled this fall a baseline of AI knowledge. The optional class, which was offered over the summer at no cost to students, introduced them to AI ethics, tools, copyright concerns and potential career impacts.

    The goal is to help students use the tools thoughtfully and intelligently, said Anand Rao, director of Mary Washington’s center for AI and the liberal arts. Initial results show most students learned something from the course, and they want more teaching on how AI applies to their majors and future careers.

    How it works: The course, IDIS 300: Introduction to AI, was offered to any new or returning UMW student to be completed any time between June and August. Students who opted in were added to a digital classroom with eight modules, each containing a short video, assigned readings, a discussion board and a quiz assignment. The class was for credit, graded as pass-fail, but didn’t fulfill any general education requirements.

Course content covered how to use AI tools and prompt generative AI systems, academic integrity and professional development, as well as how to critically evaluate AI responses.

    “I thought those were all really important as a starting point, and that still just scratches the surface,” Rao said.

    The course is not designed to make everyone an AI user, Rao said, “but I do want them to be able to speak thoughtfully and intelligently about the use of tools, the application of tools and when and how they make decisions in which they’ll be able to use those tools.”

    At the end of the course, students submitted a short paper analyzing an AI tool used in their field or discipline—its output, use cases and ways the tool could be improved.

    Rao developed most of the content, but he collaborated with campus stakeholders who could provide additional insight, such as the Honor Council, to lay out how AI use is articulated in the honor code.

    The impact: In total, the first class enrolled 249 students from a variety of majors and disciplines, or about 6 percent of the university’s total undergrad population. A significant number of the course enrollees were incoming freshmen. Eighty-eight percent of students passed the course, and most had positive feedback on the class content and structure.

    In postcourse surveys, 68 percent of participants indicated IDIS 300 should be a mandatory course or highly recommended for all students.

    “If you know nothing about AI, then this course is a great place to start,” said one junior, noting that the content builds from the basics to direct career applications.

    What’s next: Rao is exploring ways to scale the course in the future, including by developing intermediate or advanced classes or creating discipline-specific offerings. He’s also hoping to recruit additional instructors, because the course had some challenges given its large size, such as conducting meaningful exchanges on the discussion board.

    The center will continue to host educational and discussion-based events throughout the year to continue critical conversations regarding generative AI. The first debate, centered on AI and the environment, aims to evaluate whether AI’s impact will be a net positive or negative over the next decade, Rao said.

    The university is also considering ways to engage the wider campus community and those outside the institution with basic AI knowledge. IDIS 300 content will be made available to nonstudents this year as a Canvas page. Some teachers in the local school district said they’d like to teach the class as a dual-enrollment course in the future.



  • A Better Way to Prepare for Job Interviews (opinion)


    One of the things that makes interviews stressful is their unpredictability, which is unfortunately also what makes them so hard to prepare for. In particular, it’s impossible to predict exactly what questions you will be asked. So, how do you get ready?

    Scripting out answers for every possible question is a popular strategy but a losing battle. There are too many (maybe infinite?) possible questions and simply not enough time. In the end, you’ll spend all your time writing and you still won’t have answers to most of the questions you might face. And while it might make you feel briefly more confident, that confidence is unlikely to survive the stress and distress of the actual interview. You’ll be rigid rather than flexible, robotic rather than responsive.

    This article outlines an interview-preparation strategy that is both easier and more effective than frantic answer scripting, one that will leave you able to answer just about any interview question smoothly.

    Step 1: Themes

    While you can’t know what questions you will get, you can pretty easily predict many of the topics your interviewers will be curious about. You can be pretty sure that an interviewer will be interested in talking about collaboration, for example, even if you can’t say for sure whether they’ll ask a standard question like “Tell us about a time when you worked with a team to achieve a goal” or something weirder like “What role do you usually play on a team?”

    Your first step is to figure out the themes that their questions are most likely to touch on. Luckily, I can offer you a starter pack. Here are five topics that are likely to show up in an interview for just about any job, so it pays to prepare for them no matter what:

    1. Communication
    2. Collaboration (including conflict!)
    3. Time and project management
    4. Problem-solving and creativity
    5. Failures and setbacks

    But you also need to identify themes that are specific to the job or field you are interviewing for. For a research and development scientist position, for example, an interviewer might also be interested in innovation and scientific thinking. For a project or product manager position, they’ll probably want to know about stakeholder management. And so on.

    To identify these specific themes, check the job ad. They may have already identified themes for you by categorizing the responsibilities or qualifications, or you can just look for patterns. What topics/ideas/words come up most often in the ad that aren’t already represented in the starter pack? What kinds of skills or experience are they expecting? If you get stuck, try throwing the ad into a word cloud generator and see what it spits out.

    Ideally, try to end this step with at least three new themes, in addition to the starter pack.

    Step 2: Stories

The strongest interview answers are anchored by a specific story from your experience, which provides a tangible demonstration of how you think and what you can do. But it’s incredibly difficult to come up with a good, relevant example in the heat of an interview, let alone to tell it effectively in a short amount of time. For that, you need some preparation and some practice.

    So for each of your themes, identify two to three relevant stories. Stories can be big (a whole project from beginning to end), or they can be small (a single interaction with a student). They can be hugely consequential (a decision you made that changed the course of your career), or they can be minor but meaningful (a small disagreement you handled well). What is most important is that the stories demonstrate your skills, experiences and attitudes clearly and compellingly.

    The point is to have a lot of material to work with, so aim for at least 10 stories total, and preferably more. The same story can apply to multiple themes, but try not to let double-dipping limit the number of stories you end up with.

    Then, for each of your stories, write an outline that gives just enough context to understand the situation, describes your actions and thinking, and says what happened at the end. Use the STAR method, if it’s useful for keeping your stories tight and focused. Shaping your stories and deciding what to say (and not say) will help your audience see your skills in action with minimal distractions. This is one of the most important parts of your prep, so take your time with it.

    Step 3: Approaches

    As important as stories are in interviewing, you usually can’t just respond to a question with a story without any framing or explanation. So you’ll want to develop language to describe some of your usual strategies, orientations or approaches to situations that fall into each of the themes. That language will help you easily link each question to one of your stories.

    So for each theme, do a little brainstorming to identify your core messaging: “What do I usually do when faced with a situation related to [THEME]?” Then write a few bullet points. (You can also reverse engineer this from the stories: Read the stories linked to a particular theme, then look for patterns in your thinking or behavior.)

    These bullet points give you what you need to form connective tissue between the specific question they ask and the story you want to tell. So if they ask, “Tell me about a time when you worked with a team to achieve a goal,” you can respond with a story and close out by describing how that illustrates a particular approach. Or if they ask, “What role do you usually play on a team?” you can start by describing how you think about collaboration and your role in it and then tell a story that illustrates that approach.

    Though we are focusing on thematic questions here, make sure to also prepare bullet points for some of the most common general interview questions, like “Why do you want this job?” and “Tell us about yourself.”

    Step 4: Bring It All Together

    You really, really, really need to practice out loud before your interview. Over the years, I’ve found that many of the graduate students and postdocs I work with spend a lot of time thinking about how they might answer questions and not nearly enough time actually trying to answer them. And so they miss the opportunity to develop the kind of fluency and flexibility that helps one navigate the unpredictable environment of an interview.

    Here’s how to use the prep you did in Steps 1-3 to practice:

    • First, practice telling each of your 10-plus stories out loud, at least three times each. The goal here is to develop fluency in your storytelling, so you can keep things focused and flowing without needing to think about it.
    • Second, for each of the bullet points you created in Step 3, practice explaining it (out loud!) a few times, ideally in a couple of different ways.
    • Third, practice bringing it all together by answering some actual interview questions. Find a long list of interview questions (like this one), then pick questions at random to answer. The randomness is important, because the goal is to practice making smooth and effective connections between questions, stories and approaches. You need to figure out what to do when you run into a question that is challenging, unexpected or just confusing.
    • And once you’ve done that, do it all again.

    In the end, you’ve created a set of building blocks that you can arrange and rearrange as needed in the moment. And it’s a set you can keep adding to with more stories and more themes, keep practicing with new questions, and keep adapting for your next interview.

    Derek Attig is assistant dean for career and professional development in the Graduate College of the University of Illinois at Urbana-Champaign. Derek is a member of the Graduate Career Consortium, an organization providing an international voice for graduate-level career and professional development leaders.


  • Why Did College Board End Best Admissions Product? (opinion)


    Earlier this month, College Board announced its decision to kill Landscape, a race-neutral tool that allowed admissions readers to better understand a student’s context for opportunity. After an awkward 2019 rollout as the “Adversity Score,” Landscape gradually gained traction in many selective admissions offices. Among other items, the dashboard provided information on the applicant’s high school, including the economic makeup of their high school class, participation trends for Advanced Placement courses and the school’s percentile SAT scores, as well as information about the local community.

Landscape was one of the more extensively studied interventions in the world of college admissions, with research showing that providing more information about an applicant’s circumstances can boost the likelihood of a low-income student being admitted. Admissions officers lack high-quality, detailed information on the high school environment for an estimated 25 percent of applicants, a trend that disproportionately disadvantages low-income students. Landscape helped fill that critical gap.

    While not every admissions office used it, Landscape was fairly popular within pockets of the admissions community, as it provided a more standardized, consistent way for admissions readers to understand an applicant’s environment. So why did College Board decide to ax it? In its statement on the decision, College Board noted that “federal and state policy continues to evolve around how institutions use demographic and geographic information in admissions.” The statement seems to be referring to the Trump administration’s nonbinding guidance that institutions should not use geographic targeting as a proxy for race in admissions.

    If College Board was worried that somehow people were using the tool as a proxy for race (and they weren’t), well, it wasn’t a very good one. In the most comprehensive study of Landscape being used on the ground, researchers found that it didn’t do anything to increase racial/ethnic diversity in admissions. Things are different when it comes to economic diversity. Use of Landscape is linked with a boost in the likelihood of admission for low-income students. As such, it was a helpful tool given the continued underrepresentation of low-income students at selective institutions.

    Still, no study to date found that Landscape had any effect on racial/ethnic diversity. The findings are unsurprising. After all, Landscape was, to quote College Board, “intentionally developed without the use or consideration of data on race or ethnicity.” If you look at the laundry list of items included in Landscape, absent are items like the racial/ethnic demographics of the high school, neighborhood or community.

    While race and class are correlated, they certainly aren’t interchangeable. Admissions officers weren’t using Landscape as a proxy for race; they were using it to compare a student’s SAT score or AP course load to those of their high school classmates. Ivy League institutions that have gone back to requiring SAT/ACT scores have stressed the importance of evaluating test scores in the student’s high school context. Eliminating Landscape makes it harder to do so.

    An important consideration: Even if using Landscape were linked with increased racial/ethnic diversity, its usage would not violate the law. The Supreme Court recently declined to hear the case Coalition for TJ v. Fairfax County School Board. In declining to hear the case, the court has likely issued a tacit blessing on race-neutral methods to advance diversity in admissions. The decision leaves the Fourth Circuit opinion, which affirmed the race-neutral admissions policy used to boost diversity at Thomas Jefferson High School for Science and Technology, intact.

The court also recognized the validity of race-neutral methods to pursue diversity in the 1989 case City of Richmond v. J.A. Croson Co. In a concurring opinion filed in Students for Fair Admissions (SFFA) v. Harvard, Justice Brett Kavanaugh quoted Justice Antonin Scalia’s words from Croson: “And governments and universities still ‘can, of course, act to undo the effects of past discrimination in many permissible ways that do not involve classification by race.’”

    College Board’s decision to ditch Landscape sends an incredibly problematic message: that tools to pursue diversity, even economic diversity, aren’t worth defending due to the fear of litigation. If a giant like College Board won’t stand behind its own perfectly legal effort to support diversity, what kind of message does that send? Regardless, colleges and universities need to remember their commitments to diversity, both racial and economic. Yes, post-SFFA, race-conscious admissions has been considerably restricted. Still, despite the bluster of the Trump administration, most tools commonly used to expand access remain legal.

    The decision to kill Landscape is incredibly disappointing, both pragmatically and symbolically. It’s a loss for efforts to broaden economic diversity at elite institutions, yet another casualty in the Trump administration’s assault on diversity. Even if the College Board has decided to abandon Landscape, institutions must not forget their obligations to make higher education more accessible to low-income students of all races and ethnicities.


  • Selecting and Supporting New Vice Chancellors: Reflections on Process & Practice – PART 1 


    • This HEPI blog was kindly authored by Dr Tom Kennie, Director of Ranmore.
    • Over the weekend, HEPI director Nick Hillman blogged about the forthcoming party conferences and the start of the new academic year. Read more here.

    Introduction 

Over the last few months, a number of well-informed commentators have focused on understanding the past, present and, to some extent, future context associated with the appointment of Vice Chancellors in the UK. See Tessa Harrison and Josh Freeman of Gatensby Sanderson, Jamie Cumming-Wesley of WittKieffer, and Paul Greatrix.

    In this and a subsequent blog post, I want to complement these works with some practice-informed reflections from my work with many senior higher education leaders. I also aim to open a debate about optimising the selection and support for new Vice Chancellors by challenging some current practices. 

    Reflections to consider when recruiting Vice Chancellors 

    Adopt a different team-based approach 

    Clearly, all appointment processes are team-based – undertaken by a selection committee. For this type of appointment, however, we need a different approach which takes collective responsibility as a ‘Selection and Transition Team’. What’s the difference? In this second approach, the team take a wider remit with responsibility for the full life cycle of the process from search to selection to handover and transition into role. The team also oversee any interim arrangements if a gap in time exists between the existing leader leaving and the successor arriving. This is often overlooked.  

    The Six Keys to a Successful Presidential Transition is an interesting overview of this approach in Canada. 

    Pre-search diagnosis  

Pre-search diagnosis (whether involving a search and selection firm or not) is often underestimated in its importance or is under-resourced. Before you start to search for a candidate to lead a university, you need to ensure those involved are all ‘on the same page’. Sometimes they are, but in other cases they fail to recognise that they are on the same, but wrong, page. Classically, the brief may be to find someone to lead the organisation of today, with no consideration of the place it seeks to be in 10 years. Before appointing a search firm, part of the solution is to ensure you have a shared understanding of the type of university you are seeking someone to lead.

Role balance and capabilities

A further diagnostic issue, linked to the former point, is to be very clear about the balance of capabilities required in your selected candidate. One way of framing this is to assess each candidate’s balance across a number of dimensions, including:

• The Chief Academic Officer (CAO) capabilities: more operational and internally focussed.
• The Chief Executive Officer (CEO) capabilities: more strategic and, initially, internally focussed.
• The Chief Civic Officer (CCO) capabilities: more strategic and externally focussed.
• The Chief Stakeholder Relationship Officer (CSRO) capabilities: more operational and externally focussed.

All four matter. One astute Vice Chancellor suggested a fifth to me: the Chief Storytelling Officer (CSO).

    Search firm or not?   

    The decision as to whether to use a search firm is rarely considered today – it is assumed you will use one. It is, however, worth pausing to reflect on this issue, if only to be very clear about what you are seeking from a search firm. What criteria should you use to select one? Are you going with one who you already use, or have used, or are you open to new players (both to you and to the higher education market)? The latter might be relevant if you are seeking to extend your search to candidates who have a career trajectory beyond higher education.  

    ‘Listing’ – how and by whom?   

Searching should lead to many potential candidates. Selecting who to consider is typically undertaken through a long-listing process, and from this a short-list is created. Make sure you understand how this will be undertaken and who will be doing it. When was the last time you asked to review the larger list from which the long list was taken?

    Psychometrics – why, which and how? 

A related matter involves the use of any psychometric instruments proposed to form part of the selection process. They are often included, yet the rationale for this is often unclear, as is the question of how the data will be used. Equally importantly, if the judgment is that they should be included, who should undertake the process? Whichever route you take, you would be wise to read Andrew Munro’s recent book on the topic, Personality Testing in Employee Selection: Challenges, Controversies and Future Directions.

    Balance questions with scenarios and dilemmas 

Given the complexity of the role of the Vice Chancellor, it is clearly important to assess candidates across a wide range of criteria. Whilst a question-and-answer process can elicit some evidence, we should all be aware of the limitations of such a process. Complementing it with a well-considered scenario-based process, involving a series of dilemmas which candidates are invited to consider, is less common than it should be.

    Rehearse final decision scenarios  

If you are fortunate as a selection panel, after having considered many different sources of evidence, you will reach a collective, unanimous decision about the candidate you wish to offer the position. Job almost done. More likely, however, you will have more than one preferred candidate – each with evidence of being appointable, albeit with gaps in some areas. Occasionally, you may also have reached an impasse where strong cases are made to appoint two equally appointable candidates. Prepare for these situations by considering them in advance. In some cases, the first time such situations are considered is during the final stage of the selection exercise.

    In part 2 I’ll focus more on support and how to ensure the leadership transition is given as much attention as candidate selection. 


  • Future-Proof Students’ (and Our) Careers by Building Uniquely Human Capacities – Faculty Focus



  • From improvement to compliance – a significant shift in the purpose of the TEF


    The Teaching Excellence Framework has always had multiple aims.

    It was partly intended to rebalance institutional focus from research towards teaching and student experience. Jo Johnson, the minister who implemented it, saw it as a means of increasing undergraduate teaching resources in line with inflation.

    Dame Shirley Pearce prioritised enhancing quality in her excellent review of TEF implementation. And there have been other purposes of the TEF: a device to support regulatory interventions where quality fell below required thresholds, and as a resource for student choice.

    And none of this should ignore its enthusiastic adoption by student recruitment teams as a marketing tool.

    As former Chair and Deputy Chair of the TEF, we are perhaps more aware than most of these competing purposes, and more experienced in understanding how regulators, institutions and assessors have navigated the complexity of TEF implementation. The TEF has had its critics – something else we are keenly aware of – but it has had a marked impact.

    Its benchmarked indicator sets have driven a data-informed and strategic approach to institutional improvement. Its concern with disparities for underrepresented groups has raised the profile of equity in institutional education strategies. Its whole institution sweep has made institutions alert to the consequences of poorly targeted education strategies and prioritised improvement goals. Now, the publication of the OfS’s consultation paper on the future of the TEF is an opportunity to reflect on how the TEF is changing and what it means for the regulatory and quality framework in England.

    A shift in purpose

    The consultation proposes that the TEF becomes part of what the OfS sees as a more integrated quality system. All registered providers will face TEF assessments, with no exemptions for small providers. Given the number of new providers seeking OfS registration, it is likely that the number to be assessed will be considerably larger than the 227 institutions in the 2023 TEF.

Partly because of the larger number of assessments to be undertaken, TEF will move to a rolling cycle, with a pool of assessors. Institutions will still be awarded three grades – one for outcomes, one for experience and one overall – but the overall grade will simply be the lower of the other two. The real impact of this will be on Bronze-rated providers, who could find themselves subject to a range of measures, potentially including student number controls or fee constraints, until they show improvement.

    The OfS consultation paper marks a significant shift in the purpose of the TEF, from quality enhancement to regulation and from improvement to compliance. The most significant changes are at the lower end of assessed performance. The consultation paper makes sensible changes to aspects of the TEF which always posed challenges for assessors and regulators, tidying up the relationship between the threshold B3 standards and the lowest TEF grades. It correctly separates measures of institutional performance on continuation and completion – over which institutions have more direct influence – from progression to employment – over which institutions have less influence.

    Pressure points

    But it does this at some heavy costs. By treating the Bronze grade as a measure of performance at, rather than above, threshold quality, it will produce just two grades above the threshold. In shifting the focus towards quantitative indicators and away from institutional discussion of context, it will make TEF life more difficult for further education institutions and institutions in locations with challenging graduate labour markets. The replacement of the student submission with student focus groups may allow more depth on some issues, but comes at the expense of breadth, and the student voice is, disappointingly, weakened.

    There are further losses as the regulatory purpose is embedded. The most significant is the move away from educational gain, and this is a real loss: following TEF 2023, almost all institutions were developing their approaches to and evaluation of educational gain, and we have seen many examples where this was shaping fruitful approaches to articulating institutional goals and the way they shape educational provision.

Educational gain is an area in which institutions were increasingly thinking about distinctiveness and how it informs student experience. It is a real loss to see it go, and it will weaken the power of many education strategies. The ideas of educational gain and distinctiveness are almost certainly going to be required for confident performance at the highest levels of achievement, but it is a real pity that this is now less explicit. Educational gain can drive distinctiveness, and distinctiveness can drive quality.

    Two sorts of institutions will face the most significant challenges. The first, obviously, are providers rated Bronze in 2023, or Silver-rated providers whose indicators are on a downward trajectory. Eleven universities were given a Bronze rating overall in the last TEF exercise – and 21 received Bronze either for the student experience or student outcomes aspects. Of the 21, only three Bronzes were for student outcomes, but under the OfS plans, all would be graded Bronze, since any institution would be given its lowest aspect grade as its overall grade. Under the proposals, Bronze-graded institutions will need to address concerns rapidly to mitigate impacts on growth plans, funding, prestige and competitive position.

The second group facing significant challenges will be those in difficult local and regional labour markets. Of the 18 institutions with Bronze in one of the two aspects of TEF 2023, only three were graded Bronze for student outcomes, whereas 15 were for student experience. Arguably this was to be expected when only two of the six features of student outcomes had associated indicators: continuation/completion and progression.

    In other words, if indicators were substantially below benchmark, there were opportunities to show how outcomes were supported and educational gain was developed. Under the new proposals, the approach to assessing student outcomes is largely, if not exclusively, indicator-based, for continuation and completion. The approach is likely to reinforce differences between institutions, and especially those with intakes from underrepresented populations.

    The stakes

    The new TEF will play out in different ways in different parts of the sector. The regulatory focus will increase pressure on some institutions, whilst appearing to relieve it in others. For those institutions operating at 2023 Bronze levels or where 2023 Silver performance is declining, the negative consequences of a poor performance in the new TEF, which may include student number controls, will loom large in institutional strategy. The stakes are now higher for these institutions.

On the other hand, institutions whose graduate employment and earnings outcomes are strong are likely to feel more relieved, though careful reading of the grade specifications for higher performance suggests that there is work to be done on education strategies in even the best-performing 2023 institutions.

    In public policy, lifting the floor – by addressing regulatory compliance – and raising the ceiling – by promoting improvement – at the same time is always difficult, but the OfS consultation seems to have landed decisively on the side of compliance rather than improvement.


  • Inquiry calls for vice-chancellor pay caps – Campus Review


    A senate inquiry has recommended Australian universities cap remuneration for vice-chancellors and senior executives, finding they are rewarded too generously compared to other staff and international peers.


  • UTS can’t blame policy for cuts: Minister – Campus Review


    The University of Technology Sydney (UTS) has been met with widespread criticism from the federal and NSW governments for its plan to cut 1100 subjects including its entire teacher education program.


  • Students score universities on experience – Campus Review


    Three private universities offer the best student experience out of all Australian institutions according to the latest student experience survey, with the University of Divinity ranked number one overall.


  • OfS’ understanding of the student interest requires improvement


    When the Office for Students’ (OfS) proposals for a new quality assessment system for England appeared in the inbox, I happened to be on a lunchbreak from delivering training at a students’ union.

    My own jaw had hit the floor several times during my initial skim of its 101 pages – and so to test the validity of my initial reactions, I attempted to explain, in good faith, the emerging system to the student leaders who had reappeared for the afternoon.

    Having explained that the regulator was hoping to provide students with a “clear view of the quality of teaching and learning” at the university, their first confusion was tied up in the idea that this was even possible in a university with 25,000 students and hundreds of degree courses.

    They’d assumed that some sort of dashboard might be produced that would help students differentiate between at least departments if not courses. When I explained that the “view” would largely be in the form of a single “medal” of Gold, Silver, Bronze or Requires improvement for the whole university, I was met with confusion.

    We’d spent some time before the break discussing the postgraduate student experience – including poor induction for international students, the lack of a policy on supervision for PGTs, and the isolation that PGRs had fed into the SU’s strategy exercise.

    When I explained that OfS was planning to introduce a PGT NSS in 2028 and then use that data in the TEF from 2030-31 – such that their university might not have the data taken into account until 2032-33 – I was met with derision. When I explained that PGRs may be incorporated from 2030–31 onwards, I was met with scorn.

    Keen to know how students might feed in, one officer asked how their views would be taken into account. I explained that as well as the NSS, the SU would have the option to create a written submission to provide contextual insight into the numbers. When one of them observed that “being honest in that will be a challenge given student numbers are falling and so is the SU’s funding”, the union’s voice coordinator (who’d been involved in the 2023 exercise) in the corner offered a wry smile.

    One of the officers – who’d had a rewarding time at the university pretty much despite their actual course – wanted to know if the system was going to tackle students like them not really feeling like they’d learned anything during their degree. Given the proposals’ intention to drop educational gain altogether, I moved on at this point. Young people have had enough of being let down.

    I’m not at home in my own home

    Back in February, you might recall that OfS published a summary of a programme of polling and focus groups that it had undertaken to understand what students wanted and needed from their higher education – and the extent to which they were getting it.

    At roughly the same time, it published proposals for a new initial Condition C5: Treating students fairly, to apply initially to newly registered providers, which drew on that research.

    As well as issues it had identified with things like contractual provisions, hidden costs and withdrawn offers, it was particularly concerned with the risk that students may take a decision about what and where to study based on false, misleading or exaggerated information.

OfS’ own research into the Teaching Excellence Framework 2023 points to one of the culprits behind that misleading information. Polling by Savanta in April and May 2024 and follow-up focus groups with prospective undergraduates over the summer both showed that applicants consistently described TEF outcomes as too broad to be of real use for their specific course decisions.

    They wanted clarity about employability rates, continuation statistics, and job placements – but what they got instead was a single provider-wide badge. Many struggled to see meaningful differences between Gold and Silver, or to reconcile how radically different providers could both hold Gold.

    The evidence also showed that while a Gold award could reassure applicants, more than one in five students aware of their provider’s TEF rating disagreed that it was a fair reflection of their own experience. That credibility gap matters.

    If the TEF continues to offer a single label for an entire university, with data that are both dated and aggregated, there is a clear danger that students will once again be misled – this time not by hidden costs or unfair contracts, but by the regulatory tool that is supposed to help them make informed choices.

    You don’t know what I’m feeling

    Absolutely central to the TEF will remain results of the National Student Survey (NSS).

    OfS says that’s because “the NSS remains the only consistently collected, UK-wide dataset that directly captures students’ views on their teaching, learning, and academic support,” and because “its long-running use provides reliable benchmarked data which allows for meaningful comparison across providers and trends over time.”

    It stresses that the survey provides an important “direct line to student perceptions,” which balances outcomes data and adds depth to panel judgements. In other words, the NSS is positioned as an indispensable barometer of student experience in a system that otherwise leans heavily on outcomes.

    But set aside the fact that it surveys only those who make it to the final year of a full undergraduate degree. The NSS doesn’t ask whether students felt their course content was up to date with current scholarship and professional practice, or whether learning outcomes were coherent and built systematically across modules and years — both central expectations under B1 (Academic experience).

    It doesn’t check whether students received targeted support to close knowledge or skills gaps, or whether they were given clear help to avoid academic misconduct through essay planning, referencing, and understanding rules – requirements spelled out in the guidance to B2 (Resources, support and engagement). It also misses whether students were confident that staff were able to teach effectively online, and whether the learning environment – including hardware, software, internet reliability, and access to study spaces – actually enabled them to learn. Again, explicit in B2, but invisible in the survey.

    On assessment, the NSS asks about clarity, fairness, and usefulness of feedback, but it doesn’t cover whether assessment methods really tested what students had been taught, whether tasks felt valid for measuring the intended outcomes, or whether students believed their assessments prepared them for professional standards. Yet B4 (Assessment and awards) requires assessments to be valid and reliable, moderated, and robust against misconduct – areas NSS perceptions can’t evidence.

    I could go on. The survey provides snapshots of the learning experience but leaves out important perception checks on the coherence, currency, integrity, and fitness-for-purpose of teaching and learning, which the B conditions (and students) expect providers to secure.

    And crucially, OfS has chosen not to use the NSS questions on organisation and management in the future TEF at all. That’s despite its own 2025 press release highlighting it as one of the weakest-performing themes in the sector – just 78.5 per cent of students responded positively – and pointing out that disabled students in particular reported significantly worse experiences than their peers.

    OfS said then that “institutions across the sector could be doing more to ensure disabled students are getting the high quality higher education experience they are entitled to,” and noted that the gap between disabled and non-disabled students was growing in organisation and management. In other words, not only is the NSS not fit for purpose, OfS’ intended use of it isn’t either.

    I followed the voice, you gave to me

    In the 2023 iteration of the TEF, the independent student submission was supposed to be one of the most exciting innovations. It was billed as a crucial opportunity for providers’ students to tell their own story – not mediated through NSS data or provider spin, but directly and independently. In OfS’ words, the student submission provided “additional insights” that would strengthen the panel’s ability to judge whether teaching and learning really were excellent.

    In this consultation, OfS says it wants to “retain the option of student input,” but with tweaks. The headline change is that the student submission would no longer need to cover “student outcomes” – an area that SUs often struggled with given the technicalities of data and the lack of obvious levers for student involvement.

    On the surface, that looks like a kindness – but scratch beneath the surface, and it’s a red flag. Part of the point of Condition B2.2b is that providers must take all reasonable steps to ensure effective engagement with each cohort of students so that “those students succeed in and beyond higher education.”

    If students’ unions feel unable to comment on how the wider student experience enables (or obstructs) student success and progression, that’s not a reason to delete it from the student submission. It’s a sign that something is wrong with the way providers involve students in what’s done to understand and shape outcomes.

    The trouble is that the light touch response ignores the depth of feedback it has already commissioned and received. Both the IFF evaluation of TEF 2023 and OfS’ own survey of student contacts documented the serious problems that student reps and students’ unions faced.

    They said the submission window was far too short – dropping guidance in October, demanding a January deadline, colliding with elections, holidays, and strikes. They said the guidance was late, vague, inaccessible, and offered no examples. They said the template was too broad to be useful. They said the burden on small and under-resourced SUs was overwhelming, and even large ones had to divert staff time away from core activity.

    They described barriers to data access – patchy dashboards, GDPR excuses, lack of analytical support. They noted that almost a third didn’t feel fully free to say what they wanted, with some monitored by staff while writing. And they told OfS that the short, high-stakes process created self-censorship, strained relationships, and duplication without impact.

    The consultation documents brush most of that aside. Little in the proposals tackles the resourcing, timing, independence, or data access problems that students actually raised.

    I’m not at home in my own home

    OfS also proposes to commission “alternative forms of evidence” – like focus groups or online meetings – where students aren’t able to produce a written submission. The regulator’s claim is that this will reduce burden, increase consistency, and make it easier to secure independent student views.

    The focus group idea is especially odd. Student representatives’ main complaint wasn’t that they couldn’t find the words – it was that they lacked the time, resource, support, and independence to tell the truth. Running a one-off OfS focus group with a handful of students doesn’t solve that. It actively sidesteps the standard in B2 and the DAPs rules on embedding students in governance and representation structures.

    If a student body struggles to marshal the evidence and write the submission, the answer should be to ask whether the provider is genuinely complying with the regulatory conditions on student engagement. Farming the job out to OfS-run focus groups allows providers with weak student partnership arrangements to escape scrutiny – precisely the opposite of what the student submission was designed to do.

    The point is that the quality of a student submission is not just a “nice to have” extra insight for the TEF panel. It is, in itself, evidence of whether a provider is complying with Condition B2. It requires providers to take all reasonable steps to ensure effective engagement with each cohort of students, and says students should make an effective contribution to academic governance.

    If students can’t access data, don’t have the collective capacity to contribute, or are cowed into self-censorship, that is not just a TEF design flaw – it is B2 evidence of non-compliance. The fact that OfS has never linked student submission struggles to B2 is bizarre. Instead of drawing on the submissions as intelligence about engagement, the regulator has treated them as optional extras.

    The refusal to make that link is even stranger when compared to what came before. Under the old QAA Institutional Review process, the student written submission was long-established, resourced, and formative. SUs had months to prepare, could share drafts, and had the time and support to work with managers on solutions before a review team arrived. It meant students could be honest without the immediate risk of reputational harm, and providers had a chance to act before being judged.

    TEF 2023 was summative from the start, rushed and high-stakes, with no requirement on providers to demonstrate they had acted on feedback. The QAA model was designed with SUs and built around partnership – the TEF model was imposed by OfS and designed around panel efficiency. OfS has learned little from the feedback from those who submitted.

    But now I’ve gotta find my own

    While I’m on the subject of learning, we should finally consider how far the proposals have drifted from the lessons of Dame Shirley Pearce’s review. Back in 2019, her panel made a point of recording what students had said loud and clear – the lack of learning gain in TEF was a fundamental flaw.

    In fact, educational gain was the single most commonly requested addition to the framework, championed by students and their representatives who argued that without it, TEF risked reducing success to continuation and jobs.

    Students told the review they wanted a system that showed whether higher education was really developing their knowledge, skills, and personal growth. They wanted recognition of the confidence, resilience, and intellectual development that are as much the point of university as a payslip.

    Pearce’s panel agreed, recommending that Educational Gains should become a fourth formal aspect of TEF, encompassing both academic achievement and personal development. Crucially, the absence of a perfect national measure was not seen as a reason to ignore the issue. Providers, the panel said, should articulate their own ambitions and evidence of gain, in line with their mission, because failing to even try left a gaping hole at the heart of quality assessment.

    Fast forward to now, and OfS is proposing to abandon the concept entirely. To students and SUs who have been told for years that their views shape regulation, the move is a slap in the face. A regulator that once promised to capture the full richness of the student experience is now narrowing the lens to what can be benchmarked in spreadsheets. The result is a framework that tells students almost nothing about what they most want to know – whether their education will help them grow.

You see the same lack of learning in the handling of extracurricular and co-curricular activity. For students, societies, volunteering, placements, and co-curricular opportunities are not optional extras but integral to how they build belonging, develop skills, and prepare for life beyond university. Access to these opportunities features heavily in the Access and Participation Risk Register precisely because they matter to student success and because they’re a part of the educational offer in and of themselves.

But in TEF 2023 OfS tied itself in knots over whether they “count” – at times allowing them in if narrowly framed as “educational”, at other times excluding them altogether. To students who know how much they learn outside of the lecture theatre, the distinction looked absurd. Now the killing off of educational gain excludes them entirely.

    You should have listened

    Taken together, OfS has delivered a masterclass in demonstrating how little it has learned from students. As a result, the body that once promised to put student voice at the centre of regulation is in danger of constructing a TEF that is both incomplete and actively misleading.

    It’s a running theme – more evidence that OfS is not interested enough in genuinely empowering students. If students don’t know what they can, should, or could expect from their education – because the standards are vague, the metrics are aggregated, and the judgements are opaque – then their representatives won’t know either. And if their reps don’t know, their students’ union can’t effectively advocate for change.

    When the only judgements against standards that OfS is interested in come from OfS itself, delivered through a very narrow funnel of risk-based regulation, that funnel inevitably gets choked off through appeals to “reduced burden” and aggregated medals that tell students nothing meaningful about their actual course or experience. The result is a system that talks about student voice while systematically disempowering the very students it claims to serve.

    In the consultation, OfS says that it wants its new quality system to be recognised as compliant with the European Standards and Guidelines (ESG), which would in time allow it to seek membership of the European Quality Assurance Register (EQAR). That’s important for providers with international partnerships and recruitment ambitions, and for students given that ESG recognition underpins trust, mobility, and recognition across the European Higher Education Area.

    But OfS’ conditions don’t require co-design of the quality assurance framework itself, nor proof that student views shape outcomes. Its proposals expand student assessor roles in the TEF, but don’t guarantee systematic involvement in all external reviews or transparency of outcomes – both central to ESG. And as the ongoing QA-FIT project and ESU have argued, the next revision of the ESG is likely to push student engagement further, emphasising co-creation, culture, and demonstrable impact.

    If it does apply for EQAR recognition, our European peers will surely notice what English students already know – the gap between OfS’ rhetoric on student partnership and the reality of its actual understanding and actions is becoming impossible to ignore.

    When I told those student officers back on campus that their university would be spending £25,000 of their student fee income every time it has to take part in the exercise, their anger was palpable. When I added that according to the new OfS chair, Silver and Gold might enable higher fees, while Bronze or “Requires Improvement” might cap or further reduce their student numbers, they didn’t actually believe me.

    The student interest? Hardly.
