  • Why Did College Board End Best Admissions Product? (opinion)

    Earlier this month, College Board announced its decision to kill Landscape, a race-neutral tool that allowed admissions readers to better understand a student’s context for opportunity. After an awkward 2019 rollout as the “Adversity Score,” Landscape gradually gained traction in many selective admissions offices. Among other items, the dashboard provided information on the applicant’s high school, including the economic makeup of their high school class, participation trends for Advanced Placement courses and the school’s percentile SAT scores, as well as information about the local community.

    Landscape was one of the more extensively studied interventions in the world of college admissions, reflecting how providing more information about an applicant’s circumstances can boost the likelihood of a low-income student being admitted. Admissions officers lack high-quality, detailed information on the high school environment for an estimated 25 percent of applicants, a trend that disproportionately disadvantages low-income students. Landscape helped fill that critical gap.

    While not every admissions office used it, Landscape was fairly popular within pockets of the admissions community, as it provided a more standardized, consistent way for admissions readers to understand an applicant’s environment. So why did College Board decide to ax it? In its statement on the decision, College Board noted that “federal and state policy continues to evolve around how institutions use demographic and geographic information in admissions.” The statement seems to be referring to the Trump administration’s nonbinding guidance that institutions should not use geographic targeting as a proxy for race in admissions.

    If College Board was worried that somehow people were using the tool as a proxy for race (and they weren’t), well, it wasn’t a very good one. In the most comprehensive study of Landscape being used on the ground, researchers found that it didn’t do anything to increase racial/ethnic diversity in admissions. Things are different when it comes to economic diversity. Use of Landscape is linked with a boost in the likelihood of admission for low-income students. As such, it was a helpful tool given the continued underrepresentation of low-income students at selective institutions.

    Still, no study to date found that Landscape had any effect on racial/ethnic diversity. The findings are unsurprising. After all, Landscape was, to quote College Board, “intentionally developed without the use or consideration of data on race or ethnicity.” If you look at the laundry list of items included in Landscape, absent are items like the racial/ethnic demographics of the high school, neighborhood or community.

    While race and class are correlated, they certainly aren’t interchangeable. Admissions officers weren’t using Landscape as a proxy for race; they were using it to compare a student’s SAT score or AP course load to those of their high school classmates. Ivy League institutions that have gone back to requiring SAT/ACT scores have stressed the importance of evaluating test scores in the student’s high school context. Eliminating Landscape makes it harder to do so.

    An important consideration: Even if using Landscape were linked with increased racial/ethnic diversity, its usage would not violate the law. The Supreme Court recently declined to hear the case Coalition for TJ v. Fairfax County School Board. In declining to hear the case, the court has likely issued a tacit blessing on race-neutral methods to advance diversity in admissions. The decision leaves the Fourth Circuit opinion, which affirmed the race-neutral admissions policy used to boost diversity at Thomas Jefferson High School for Science and Technology, intact.

    The court also recognized the validity of race-neutral methods to pursue diversity in the 1989 case City of Richmond v. J.A. Croson Co. In a concurring opinion filed in Students for Fair Admissions (SFFA) v. Harvard, Justice Brett Kavanaugh quoted Justice Antonin Scalia’s words from Croson: “And governments and universities still ‘can, of course, act to undo the effects of past discrimination in many permissible ways that do not involve classification by race.’”

    College Board’s decision to ditch Landscape sends an incredibly problematic message: that tools to pursue diversity, even economic diversity, aren’t worth defending due to the fear of litigation. If a giant like College Board won’t stand behind its own perfectly legal effort to support diversity, what kind of message does that send? Regardless, colleges and universities need to remember their commitments to diversity, both racial and economic. Yes, post-SFFA, race-conscious admissions has been considerably restricted. Still, despite the bluster of the Trump administration, most tools commonly used to expand access remain legal.

    The decision to kill Landscape is incredibly disappointing, both pragmatically and symbolically. It’s a loss for efforts to broaden economic diversity at elite institutions, yet another casualty in the Trump administration’s assault on diversity. Even if the College Board has decided to abandon Landscape, institutions must not forget their obligations to make higher education more accessible to low-income students of all races and ethnicities.

  • Online Course Gives College Students a Foundation on GenAI

    As more employers identify uses for generative artificial intelligence in the workplace, colleges are embedding tech skills into the curriculum to best prepare students for their careers.

    But identifying how and when to deliver that content has been a challenge, particularly given the varying perspectives different disciplines have on generative AI and when its use should be allowed. A June report from Tyton Partners found that 42 percent of students use generative AI tools at least weekly, and two-thirds of students use a single generative AI tool like ChatGPT. A survey by Inside Higher Ed and Generation Lab found that 85 percent of students had used generative AI for coursework in the past year, most often for brainstorming or asking questions.

    The University of Mary Washington developed an asynchronous one-credit course to give all students enrolled this fall a baseline foundation of AI knowledge. The optional class, which was offered over the summer at no cost to students, introduced them to AI ethics, tools, copyright concerns and potential career impacts.

    The goal is to help students use the tools thoughtfully and intelligently, said Anand Rao, director of Mary Washington’s center for AI and the liberal arts. Initial results show most students learned something from the course, and they want more teaching on how AI applies to their majors and future careers.

    How it works: The course, IDIS 300: Introduction to AI, was offered to any new or returning UMW student to be completed any time between June and August. Students who opted in were added to a digital classroom with eight modules, each containing a short video, assigned readings, a discussion board and a quiz assignment. The class was for credit, graded as pass-fail, but didn’t fulfill any general education requirements.

    Course content covered how to use and prompt generative AI tools, academic integrity, professional development, and how to critically evaluate AI responses.

    “I thought those were all really important as a starting point, and that still just scratches the surface,” Rao said.

    The course is not designed to make everyone an AI user, Rao said, “but I do want them to be able to speak thoughtfully and intelligently about the use of tools, the application of tools and when and how they make decisions in which they’ll be able to use those tools.”

    At the end of the course, students submitted a short paper analyzing an AI tool used in their field or discipline—its output, use cases and ways the tool could be improved.

    Rao developed most of the content, but he collaborated with campus stakeholders who could provide additional insight, such as the Honor Council, to lay out how AI use is articulated in the honor code.

    The impact: In total, the first class enrolled 249 students from a variety of majors and disciplines, or about 6 percent of the university’s total undergrad population. A significant number of the course enrollees were incoming freshmen. Eighty-eight percent of students passed the course, and most had positive feedback on the class content and structure.

    In postcourse surveys, 68 percent of participants indicated IDIS 300 should be a mandatory course or highly recommended for all students.

    “If you know nothing about AI, then this course is a great place to start,” said one junior, noting that the content builds from the basics to direct career applications.

    What’s next: Rao is exploring ways to scale the course in the future, including by developing intermediate or advanced classes or creating discipline-specific offerings. He’s also hoping to recruit additional instructors, because the course had some challenges given its large size, such as conducting meaningful exchanges on the discussion board.

    The center will continue to host educational and discussion-based events throughout the year to continue critical conversations regarding generative AI. The first debate, centered on AI and the environment, aims to evaluate whether AI’s impact will be a net positive or negative over the next decade, Rao said.

    The university is also considering ways to engage the wider campus community and those outside the institution with basic AI knowledge. IDIS 300 content will be made available to nonstudents this year as a Canvas page. Some teachers in the local school district said they’d like to teach the class as a dual-enrollment course in the future.

  • Selecting and Supporting New Vice Chancellors: Reflections on Process & Practice – PART 1 

    • This HEPI blog was kindly authored by Dr Tom Kennie, Director of Ranmore.
    • Over the weekend, HEPI director Nick Hillman blogged about the forthcoming party conferences and the start of the new academic year. Read more here.

    Introduction 

    Over the last few months, a number of well-informed commentators have focused on understanding the past, present and, to some extent, future context associated with the appointment of Vice Chancellors in the UK. See Tessa Harrison and Josh Freeman of Gatensby Sanderson, Jamie Cumming-Wesley of WittKieffer, and Paul Greatrix.

    In this and a subsequent blog post, I want to complement these works with some practice-informed reflections from my work with many senior higher education leaders. I also aim to open a debate about optimising the selection and support for new Vice Chancellors by challenging some current practices. 

    Reflections to consider when recruiting Vice Chancellors 

    Adopt a different team-based approach 

    Clearly, all appointment processes are team-based – undertaken by a selection committee. For this type of appointment, however, we need a different approach which takes collective responsibility as a ‘Selection and Transition Team’. What’s the difference? In this second approach, the team take a wider remit with responsibility for the full life cycle of the process from search to selection to handover and transition into role. The team also oversee any interim arrangements if a gap in time exists between the existing leader leaving and the successor arriving. This is often overlooked.  

    The Six Keys to a Successful Presidential Transition is an interesting overview of this approach in Canada. 

    Pre-search diagnosis  

    Pre-search diagnosis (whether involving a search and selection firm or not) is often underestimated in its importance or is under-resourced. Before you start to search for a candidate to lead a university, you need to ensure those involved are all ‘on the same page’. Sometimes they are, but in other cases they fail to recognise that they are on the same, but wrong, page. Classically, the brief is to find someone to lead the organisation of today, with no consideration of the place it seeks to be in 10 years. Before appointing a search firm, part of the solution is to ensure you have a shared understanding of the type of university you are seeking someone to lead.

    Role balance and capabilities

    A further diagnostic issue, linked to the former point, is to be very clear about the balance of capabilities required in your selected candidate. One way of framing this is to assess the candidate balance across a number of dimensions, including:  

    • The Chief Academic Officer (CAO) capabilities: more operational and internally focussed;
    • The Chief Executive Officer (CEO) capabilities: more strategic and initially internally focussed;
    • The Chief Civic Officer (CCO) capabilities: more strategic and externally focussed; and
    • The Chief Stakeholder Relationship Officer (CSRO) capabilities: more operational and externally focussed.

    All four matter. One astute Vice Chancellor suggested a fifth to me: the Chief Storytelling Officer (CSO).

    Search firm or not?   

    The decision as to whether to use a search firm is rarely considered today – it is assumed you will use one. It is, however, worth pausing to reflect on this issue, if only to be very clear about what you are seeking from a search firm. What criteria should you use to select one? Are you going with one who you already use, or have used, or are you open to new players (both to you and to the higher education market)? The latter might be relevant if you are seeking to extend your search to candidates who have a career trajectory beyond higher education.  

    ‘Listing’ – how and by whom?   

    Searching should lead to many potential candidates. Selecting whom to consider is typically undertaken through a long-listing process, and from this a short-list is created. Make sure you understand how this will be undertaken and who will be doing it. When was the last time you asked to review the larger list from which the long list was taken?

    Psychometrics – why, which and how? 

    A related matter involves the use of any psychometric instruments proposed to form part of the selection process. They are often included, yet the rationale for this is often unclear, as is the question of how the data will be used. Equally importantly, if the judgment is that they should be included, who should undertake the process? Whichever route you take, you would be wise to read Andrew Munro’s recent book on the topic, Personality Testing in Employee Selection: Challenges, Controversies and Future Directions.

    Balance questions with scenarios and dilemmas 

    Given the complexity of the role of the Vice Chancellor, it is clearly important to assess candidates across a wide range of criteria. Whilst a question-and-answer process can elicit some evidence, we should all be aware of the limitations of such a process. Complementing it with a well-considered scenario-based process, involving a series of dilemmas which candidates are invited to consider, is less common than it should be.

    Rehearse final decision scenarios  

    If you are fortunate as a selection panel, after having considered many different sources of evidence, you will reach a collective, unanimous decision about the candidate you wish to offer the position. Job almost done. More likely, however, you will have more than one preferred candidate, each with evidence of being appointable, albeit with gaps in some areas. Occasionally, you may also reach an impasse where strong cases are made for two equally appointable candidates. Prepare for these situations by considering them in advance; in some cases, the first time they are considered is during the final stage of the selection exercise.

    In part 2 I’ll focus more on support and how to ensure the leadership transition is given as much attention as candidate selection. 

  • Future-Proof Students’ (and Our) Careers by Building Uniquely Human Capacities – Faculty Focus

  • From improvement to compliance – a significant shift in the purpose of the TEF

    The Teaching Excellence Framework has always had multiple aims.

    It was partly intended to rebalance institutional focus from research towards teaching and student experience. Jo Johnson, the minister who implemented it, saw it as a means of increasing undergraduate teaching resources in line with inflation.

    Dame Shirley Pearce prioritised enhancing quality in her excellent review of TEF implementation. And there have been other purposes of the TEF: a device to support regulatory interventions where quality fell below required thresholds, and as a resource for student choice.

    And none of this should ignore its enthusiastic adoption by student recruitment teams as a marketing tool.

    As former Chair and Deputy Chair of the TEF, we are perhaps more aware than most of these competing purposes, and more experienced in understanding how regulators, institutions and assessors have navigated the complexity of TEF implementation. The TEF has had its critics – something else we are keenly aware of – but it has had a marked impact.

    Its benchmarked indicator sets have driven a data-informed and strategic approach to institutional improvement. Its concern with disparities for underrepresented groups has raised the profile of equity in institutional education strategies. Its whole institution sweep has made institutions alert to the consequences of poorly targeted education strategies and prioritised improvement goals. Now, the publication of the OfS’s consultation paper on the future of the TEF is an opportunity to reflect on how the TEF is changing and what it means for the regulatory and quality framework in England.

    A shift in purpose

    The consultation proposes that the TEF becomes part of what the OfS sees as a more integrated quality system. All registered providers will face TEF assessments, with no exemptions for small providers. Given the number of new providers seeking OfS registration, it is likely that the number to be assessed will be considerably larger than the 227 institutions in the 2023 TEF.

    Partly because of the larger number of assessments to be undertaken, TEF will move to a rolling cycle, with a pool of assessors. Institutions will still be awarded three grades – one for outcomes, one for experience and one overall – but their overall grade will simply be the lower of the other two. The real impact of this will be on Bronze-rated providers, who could find themselves subject to a range of measures, potentially including student number controls or fee constraints, until they show improvement.

    The OfS consultation paper marks a significant shift in the purpose of the TEF, from quality enhancement to regulation and from improvement to compliance. The most significant changes are at the lower end of assessed performance. The consultation paper makes sensible changes to aspects of the TEF which always posed challenges for assessors and regulators, tidying up the relationship between the threshold B3 standards and the lowest TEF grades. It correctly separates measures of institutional performance on continuation and completion – over which institutions have more direct influence – from progression to employment – over which institutions have less influence.

    Pressure points

    But it does this at some heavy costs. By treating the Bronze grade as a measure of performance at, rather than above, threshold quality, it will produce just two grades above the threshold. In shifting the focus towards quantitative indicators and away from institutional discussion of context, it will make TEF life more difficult for further education institutions and institutions in locations with challenging graduate labour markets. The replacement of the student submission with student focus groups may allow more depth on some issues, but comes at the expense of breadth, and the student voice is, disappointingly, weakened.

    There are further losses as the regulatory purpose is embedded. The most significant is the move away from educational gain, and this is a real loss: following TEF 2023, almost all institutions were developing their approaches to and evaluation of educational gain, and we have seen many examples where this was shaping fruitful approaches to articulating institutional goals and the way they shape educational provision.

    Educational gain is an area in which institutions were increasingly thinking about distinctiveness and how it informs student experience. It is a real loss to see it go, and it will weaken the power of many education strategies. It is almost certainly the case that the ideas of educational gain and distinctiveness are going to be required for confident performance at the highest levels of achievement, but it is a real pity that it is less explicit. Educational gain can drive distinctiveness, and distinctiveness can drive quality.

    Two sorts of institutions will face the most significant challenges. The first, obviously, are providers rated Bronze in 2023, or Silver-rated providers whose indicators are on a downward trajectory. Eleven universities were given a Bronze rating overall in the last TEF exercise – and 21 received Bronze either for the student experience or student outcomes aspects. Of the 21, only three Bronzes were for student outcomes, but under the OfS plans, all would be graded Bronze, since any institution would be given its lowest aspect grade as its overall grade. Under the proposals, Bronze-graded institutions will need to address concerns rapidly to mitigate impacts on growth plans, funding, prestige and competitive position.

    The second group facing significant challenges will be those in difficult local and regional labour markets. Of the 18 institutions with Bronze in one of the two aspects of TEF 2023, only three were graded Bronze for student outcomes, whereas 15 were for student experience. Arguably this was to be expected when only two of the six features of student outcomes had associated indicators: continuation/completion and progression.

    In other words, if indicators were substantially below benchmark, there were opportunities to show how outcomes were supported and educational gain was developed. Under the new proposals, the approach to assessing student outcomes is largely, if not exclusively, indicator-based, for continuation and completion. The approach is likely to reinforce differences between institutions, and especially those with intakes from underrepresented populations.

    The stakes

    The new TEF will play out in different ways in different parts of the sector. The regulatory focus will increase pressure on some institutions, whilst appearing to relieve it in others. For those institutions operating at 2023 Bronze levels or where 2023 Silver performance is declining, the negative consequences of a poor performance in the new TEF, which may include student number controls, will loom large in institutional strategy. The stakes are now higher for these institutions.

    On the other hand, institutions whose graduate employment and earnings outcomes are strong are likely to feel more relieved, though careful reading of the grade specifications for higher performance suggests that there is work to be done on education strategies in even the best-performing 2023 institutions.

    In public policy, lifting the floor – by addressing regulatory compliance – and raising the ceiling – by promoting improvement – at the same time is always difficult, but the OfS consultation seems to have landed decisively on the side of compliance rather than improvement.

  • Inquiry calls for vice-chancellor pay caps – Campus Review

    A senate inquiry has recommended Australian universities cap remuneration for vice-chancellors and senior executives, finding they are rewarded too generously compared to other staff and international peers.

  • UTS can’t blame policy for cuts: Minister – Campus Review

    The University of Technology Sydney (UTS) has been met with widespread criticism from the federal and NSW governments for its plan to cut 1100 subjects including its entire teacher education program.

  • Students score universities on experience – Campus Review

    Three private universities offer the best student experience out of all Australian institutions according to the latest student experience survey, with the University of Divinity ranked number one overall.

  • OfS’ understanding of the student interest requires improvement

    When the Office for Students’ (OfS) proposals for a new quality assessment system for England appeared in the inbox, I happened to be on a lunchbreak from delivering training at a students’ union.

    My own jaw had hit the floor several times during my initial skim of its 101 pages – and so to test the validity of my initial reactions, I attempted to explain, in good faith, the emerging system to the student leaders who had reappeared for the afternoon.

    Having explained that the regulator was hoping to provide students with a “clear view of the quality of teaching and learning” at the university, their first confusion was tied up in the idea that this was even possible in a university with 25,000 students and hundreds of degree courses.

    They’d assumed that some sort of dashboard might be produced that would help students differentiate between at least departments if not courses. When I explained that the “view” would largely be in the form of a single “medal” of Gold, Silver, Bronze or Requires Improvement for the whole university, I was met with confusion.

    We’d spent some time before the break discussing the postgraduate student experience – including poor induction for international students, the lack of a policy on supervision for PGTs, and the isolation that PGRs had fed into the SU’s strategy exercise.

    When I explained that OfS was planning to introduce a PGT NSS in 2028 and then use that data in the TEF from 2030-31 – such that their university might not have the data taken into account until 2032-33 – I was met with derision. When I explained that PGRs may be incorporated from 2030-31 onwards, I was met with scorn.

    Keen to know how students might feed in, one officer asked how their views would be taken into account. I explained that as well as the NSS, the SU would have the option to create a written submission to provide contextual insight into the numbers. When one of them observed that “being honest in that will be a challenge given student numbers are falling and so is the SU’s funding”, the union’s voice coordinator (who’d been involved in the 2023 exercise) in the corner offered a wry smile.

    One of the officers – who’d had a rewarding time at the university pretty much despite their actual course – wanted to know if the system was going to tackle students like them not really feeling like they’d learned anything during their degree. Given the proposals’ intention to drop educational gain altogether, I moved on at this point. Young people have had enough of being let down.

    I’m not at home in my own home

    Back in February, you might recall that OfS published a summary of a programme of polling and focus groups that it had undertaken to understand what students wanted and needed from their higher education – and the extent to which they were getting it.

    At roughly the same time, it published proposals for a new initial Condition C5: Treating students fairly, to apply initially to newly registered providers, which drew on that research.

    As well as issues it had identified with things like contractual provisions, hidden costs and withdrawn offers, it was particularly concerned with the risk that students may take a decision about what and where to study based on false, misleading or exaggerated information.

    OfS’ own research into the Teaching Excellence Framework 2023 signals one of the culprits for that misleading information. Polling by Savanta in April and May 2024 and follow-up focus groups with prospective undergraduates over the summer both showed that applicants consistently described TEF outcomes as too broad to be of real use for their specific course decisions.

    They wanted clarity about employability rates, continuation statistics, and job placements – but what they got instead was a single provider-wide badge. Many struggled to see meaningful differences between Gold and Silver, or to reconcile how radically different providers could both hold Gold.

    The evidence also showed that while a Gold award could reassure applicants, more than one in five students aware of their provider’s TEF rating disagreed that it was a fair reflection of their own experience. That credibility gap matters.

    If the TEF continues to offer a single label for an entire university, with data that are both dated and aggregated, there is a clear danger that students will once again be misled – this time not by hidden costs or unfair contracts, but by the regulatory tool that is supposed to help them make informed choices.

    You don’t know what I’m feeling

    Results of the National Student Survey (NSS) will remain absolutely central to the TEF.

    OfS says that’s because “the NSS remains the only consistently collected, UK-wide dataset that directly captures students’ views on their teaching, learning, and academic support,” and because “its long-running use provides reliable benchmarked data which allows for meaningful comparison across providers and trends over time.”

    It stresses that the survey provides an important “direct line to student perceptions,” which balances outcomes data and adds depth to panel judgements. In other words, the NSS is positioned as an indispensable barometer of student experience in a system that otherwise leans heavily on outcomes.

    But set aside the fact that it surveys only those who make it to the final year of a full undergraduate degree. The NSS doesn’t ask whether students felt their course content was up to date with current scholarship and professional practice, or whether learning outcomes were coherent and built systematically across modules and years — both central expectations under B1 (Academic experience).

    It doesn’t check whether students received targeted support to close knowledge or skills gaps, or whether they were given clear help to avoid academic misconduct through essay planning, referencing, and understanding rules – requirements spelled out in the guidance to B2 (Resources, support and engagement). It also misses whether students were confident that staff were able to teach effectively online, and whether the learning environment – including hardware, software, internet reliability, and access to study spaces – actually enabled them to learn. Again, explicit in B2, but invisible in the survey.

    On assessment, the NSS asks about clarity, fairness, and usefulness of feedback, but it doesn’t cover whether assessment methods really tested what students had been taught, whether tasks felt valid for measuring the intended outcomes, or whether students believed their assessments prepared them for professional standards. Yet B4 (Assessment and awards) requires assessments to be valid and reliable, moderated, and robust against misconduct – areas NSS perceptions can’t evidence.

    I could go on. The survey provides snapshots of the learning experience but leaves out important perception checks on the coherence, currency, integrity, and fitness-for-purpose of teaching and learning, which the B conditions (and students) expect providers to secure.

    And crucially, OfS has chosen not to use the NSS questions on organisation and management in the future TEF at all. That’s despite its own 2025 press release highlighting it as one of the weakest-performing themes in the sector – just 78.5 per cent of students responded positively – and pointing out that disabled students in particular reported significantly worse experiences than their peers.

    OfS said then that “institutions across the sector could be doing more to ensure disabled students are getting the high quality higher education experience they are entitled to,” and noted that the gap between disabled and non-disabled students was growing in organisation and management. In other words, not only is the NSS not fit for purpose, OfS’ intended use of it isn’t either.

    I followed the voice, you gave to me

    In the 2023 iteration of the TEF, the independent student submission was supposed to be one of the most exciting innovations. It was billed as a crucial opportunity for providers’ students to tell their own story – not mediated through NSS data or provider spin, but directly and independently. In OfS’ words, the student submission provided “additional insights” that would strengthen the panel’s ability to judge whether teaching and learning really were excellent.

    In this consultation, OfS says it wants to “retain the option of student input,” but with tweaks. The headline change is that the student submission would no longer need to cover “student outcomes” – an area that SUs often struggled with given the technicalities of data and the lack of obvious levers for student involvement.

    On the surface, that looks like a kindness – but scratch beneath the surface, and it’s a red flag. Part of the point of Condition B2.2b is that providers must take all reasonable steps to ensure effective engagement with each cohort of students so that “those students succeed in and beyond higher education.”

    If students’ unions feel unable to comment on how the wider student experience enables (or obstructs) student success and progression, that’s not a reason to delete it from the student submission. It’s a sign that something is wrong with the way providers involve students in what’s done to understand and shape outcomes.

    The trouble is that the light touch response ignores the depth of feedback it has already commissioned and received. Both the IFF evaluation of TEF 2023 and OfS’ own survey of student contacts documented the serious problems that student reps and students’ unions faced.

    They said the submission window was far too short – dropping guidance in October, demanding a January deadline, colliding with elections, holidays, and strikes. They said the guidance was late, vague, inaccessible, and offered no examples. They said the template was too broad to be useful. They said the burden on small and under-resourced SUs was overwhelming, and even large ones had to divert staff time away from core activity.

    They described barriers to data access – patchy dashboards, GDPR excuses, lack of analytical support. They noted that almost a third didn’t feel fully free to say what they wanted, with some monitored by staff while writing. And they told OfS that the short, high-stakes process created self-censorship, strained relationships, and duplication without impact.

    The consultation documents brush most of that aside. Little in the proposals tackles the resourcing, timing, independence, or data access problems that students actually raised.

    I’m not at home in my own home

    OfS also proposes to commission “alternative forms of evidence” – like focus groups or online meetings – where students aren’t able to produce a written submission. The regulator’s claim is that this will reduce burden, increase consistency, and make it easier to secure independent student views.

    The focus group idea is especially odd. Student representatives’ main complaint wasn’t that they couldn’t find the words – it was that they lacked the time, resource, support, and independence to tell the truth. Running a one-off OfS focus group with a handful of students doesn’t solve that. It actively sidesteps the standard in B2 and the DAPs rules on embedding students in governance and representation structures.

    If a student body struggles to marshal the evidence and write the submission, the answer should be to ask whether the provider is genuinely complying with the regulatory conditions on student engagement. Farming the job out to OfS-run focus groups allows providers with weak student partnership arrangements to escape scrutiny – precisely the opposite of what the student submission was designed to do.

    The point is that the quality of a student submission is not just a “nice to have” extra insight for the TEF panel. It is, in itself, evidence of whether a provider is complying with Condition B2. It requires providers to take all reasonable steps to ensure effective engagement with each cohort of students, and says students should make an effective contribution to academic governance.

    If students can’t access data, don’t have the collective capacity to contribute, or are cowed into self-censorship, that is not just a TEF design flaw – it is B2 evidence of non-compliance. The fact that OfS has never linked student submission struggles to B2 is bizarre. Instead of drawing on the submissions as intelligence about engagement, the regulator has treated them as optional extras.

    The refusal to make that link is even stranger when compared to what came before. Under the old QAA Institutional Review process, the student written submission was long-established, resourced, and formative. SUs had months to prepare, could share drafts, and had the time and support to work with managers on solutions before a review team arrived. It meant students could be honest without the immediate risk of reputational harm, and providers had a chance to act before being judged.

    TEF 2023 was summative from the start, rushed and high-stakes, with no requirement on providers to demonstrate they had acted on feedback. The QAA model was designed with SUs and built around partnership – the TEF model was imposed by OfS and designed around panel efficiency. OfS has learned little from the feedback from those who submitted.

    But now I’ve gotta find my own

    While I’m on the subject of learning, we should finally consider how far the proposals have drifted from the lessons of Dame Shirley Pearce’s review. Back in 2019, her panel made a point of recording what students had said loud and clear – the lack of learning gain in TEF was a fundamental flaw.

    In fact, educational gain was the single most commonly requested addition to the framework, championed by students and their representatives who argued that without it, TEF risked reducing success to continuation and jobs.

    Students told the review they wanted a system that showed whether higher education was really developing their knowledge, skills, and personal growth. They wanted recognition of the confidence, resilience, and intellectual development that are as much the point of university as a payslip.

    Pearce’s panel agreed, recommending that Educational Gains should become a fourth formal aspect of TEF, encompassing both academic achievement and personal development. Crucially, the absence of a perfect national measure was not seen as a reason to ignore the issue. Providers, the panel said, should articulate their own ambitions and evidence of gain, in line with their mission, because failing to even try left a gaping hole at the heart of quality assessment.

    Fast forward to now, and OfS is proposing to abandon the concept entirely. To students and SUs who have been told for years that their views shape regulation, the move is a slap in the face. A regulator that once promised to capture the full richness of the student experience is now narrowing the lens to what can be benchmarked in spreadsheets. The result is a framework that tells students almost nothing about what they most want to know – whether their education will help them grow.

    You see the same lack of learning in the handling of extracurricular and co-curricular activity. For students, societies, volunteering, placements, and co-curricular opportunities are not optional extras but integral to how they build belonging, develop skills, and prepare for life beyond university. Access to these opportunities features heavily in the Access and Participation Risk Register precisely because they matter to student success and because they’re a part of the educational offer in and of themselves.

    But in TEF 2023 OfS tied itself in knots over whether they “count” – at times allowing them in if narrowly framed as “educational”, at other times excluding them altogether. To students who know how much they learn outside of the lecture theatre, the distinction looked absurd. Now the killing off of educational gain excludes them entirely.

    You should have listened

    Taken together, OfS has delivered a masterclass in demonstrating how little it has learned from students. As a result, the body that once promised to put student voice at the centre of regulation is in danger of constructing a TEF that is both incomplete and actively misleading.

    It’s a running theme – more evidence that OfS is not interested enough in genuinely empowering students. If students don’t know what they can, should, or could expect from their education – because the standards are vague, the metrics are aggregated, and the judgements are opaque – then their representatives won’t know either. And if their reps don’t know, their students’ union can’t effectively advocate for change.

    When the only judgements against standards that OfS is interested in come from OfS itself, delivered through a very narrow funnel of risk-based regulation, that funnel inevitably gets choked off through appeals to “reduced burden” and aggregated medals that tell students nothing meaningful about their actual course or experience. The result is a system that talks about student voice while systematically disempowering the very students it claims to serve.

    In the consultation, OfS says that it wants its new quality system to be recognised as compliant with the European Standards and Guidelines (ESG), which would in time allow it to seek membership of the European Quality Assurance Register (EQAR). That’s important for providers with international partnerships and recruitment ambitions, and for students given that ESG recognition underpins trust, mobility, and recognition across the European Higher Education Area.

    But OfS’ conditions don’t require co-design of the quality assurance framework itself, nor proof that student views shape outcomes. Its proposals expand student assessor roles in the TEF, but don’t guarantee systematic involvement in all external reviews or transparency of outcomes – both central to ESG. And as the ongoing QA-FIT project and ESU have argued, the next revision of the ESG is likely to push student engagement further, emphasising co-creation, culture, and demonstrable impact.

    If it does apply for EQAR recognition, our European peers will surely notice what English students already know – the gap between OfS’ rhetoric on student partnership and the reality of its actual understanding and actions is becoming impossible to ignore.

    When I told those student officers back on campus that their university would be spending £25,000 of their student fee income every time it has to take part in the exercise, their anger was palpable. When I added that according to the new OfS chair, Silver and Gold might enable higher fees, while Bronze or “Requires Improvement” might cap or further reduce their student numbers, they didn’t actually believe me.

    The student interest? Hardly.

  • An assessor’s perspective on the Office for Students’ TEF shake-up

    Across the higher education sector in England some have been waiting with bated breath for details of the proposed new Teaching Excellence Framework. Even amidst the multilayered preparations for a new academic year – the planning to induct new students, to teach well and assess effectively, to create a welcoming environment for all – those responsible for education quality have had one eye firmly on the new TEF.

    The OfS has now published its proposals along with an invitation to the whole sector to provide feedback on them by 11 December 2025. As an external adviser for some very different types of provider, I’m already hearing a kaleidoscope of changing questions from colleagues. When will our institution or organisation next be assessed if the new TEF is to run on a rolling programme rather than in the same year for everyone? How will the approach to assessing us change now that basic quality requirements are included alongside the assessment of educational ‘excellence’? What should we be doing right now to prepare?

    Smaller providers, including further education colleges that offer some higher education programmes, have not previously been required to participate in the TEF assessment. They will now all need to take part, so have a still wider range of questions about the whole process. How onerous will it be? How will data about our educational provision, both quantitative and qualitative, be gathered and assessed? What form will our written submission to the OfS need to take? How will judgements be made?

    As a member of TEF assessment panels through TEF’s entire lifecycle to date, I’ve read the proposals with great interest. From an assessor’s point of view, I’ve pondered on how the assessment process will change. Will the new shape of TEF complicate or help streamline the assessment process so that ratings can be fairly awarded for providers of every mission, shape and size?

    Panel focus

    TEF panels have always comprised experts from the whole sector, including academics, professional staff and student representatives. We have looked at the evidence of “teaching excellence” (I think of it as good education) from each provider very carefully. It makes sense that the two main areas of assessment, or “aspects” – student experience and student outcomes – will continue to be discrete areas of focus, leading to two separate ratings of either Gold, Silver, Bronze or Requires Improvement. That’s because the data for each of these can differ quite markedly within a single provider, so it can mislead students to conflate the two judgements.

    [Diagram from page 18 of the consultation document]

    Another positive continuity is the retention of both quantitative and qualitative evidence. Quantitative data include the detailed datasets provided by OfS, benchmarked against the sector. These are extremely helpful to assessors who can compare the experiences and outcomes of students from different demographics across the full range of providers.

    Qualitative data have previously come from 25-page written submissions from each provider, and from written student submissions. There are planned changes afoot for both of these forms of evidence, but they will still remain crucial.

    The written provider submissions may be shorter next time. Arguably there is a risk here, as submissions have always enabled assessors to contextualise the larger datasets. Each provider has its own story of setting out to make strategic improvements to their educational provision, and the submissions include both qualitative narrative and internally produced quantitative datasets related to the assessment criteria, or indicators.

    However, it’s reasonable for future submissions to be shorter as the student outcomes aspect will rely upon a more nuanced range of data relating to study outcomes as well as progression post-study (proposal 7). While it’s not yet clear what the full range of data will be, this approach is potentially helpful to assessors and to the sector, as students’ backgrounds, subject fields, locations and career plans vary greatly and these data take account of those differences.

    The greater focus on improved datasets suggests that there will be less reliance on additional information, previously provided at some length, on how students’ outcomes are being supported. The proof of the pudding for how well students continue with, complete and progress from their studies is in the eating, or rather in the outcomes themselves, rather than the recipes. Outcomes criteria should be clearer in the next TEF in this sense, and more easily applied with consistency.

    Another proposed change focuses on how evidence might be more helpfully elicited from students and their representatives (proposal 10). In the last TEF students were invited to submit written evidence, and some student submissions were extremely useful to assessors, focusing on the key criteria and giving a rounded picture of local improvements and areas for development. For understandable reasons, though, students of some providers did not, or could not, make a submission; the huge variations in provider size means that in some contexts students do not have the capacity or opportunity to write up their collective experiences. This variation was challenging for assessors, and anything that can be done to level the playing field for students’ voices next time will be welcomed.

    Towards the data limits

    Perhaps the greatest challenge for TEF assessors in previous rounds arose when we were faced with a provider with very limited data. OfS’s proposal 9 sets out to address this by varying the assessment approach accordingly. Where there is no statistical confidence in a provider’s NSS data (or no NSS data at all), direct evidence of students’ experiences with that provider will be sought, and where there is insufficient statistical confidence in a provider’s student outcomes, no rating will be awarded for that aspect.

    The proposed new approach to the outcomes rating makes great sense – it is so important to avoid reaching for a rating which is not supported by clear evidence. The plan to fill any NSS gap with more direct evidence from students is also logical, although it could run into practical challenges. It will be useful to see suggestions from the sector about how this might be achieved within differing local contexts.

    Finally, how might assessment panels be affected by changes to what we are assessing, and the criteria for awarding ratings? First, both aspects will incorporate the requirements of OfS’s B conditions – general ongoing, fundamental conditions of registration. The student experience aspect will now be aligned with B1 (course content and delivery), B2 (resources, academic support and student engagement) and part of B4 (effective assessment). Similarly, the student outcomes B condition will be embedded into the outcomes aspect of the new TEF. This should make even clearer to assessors what is being assessed, where the baseline is and what sits above that line as excellent or outstanding.

    And this in turn should make agreeing upon ratings more straightforward. It was not always clear in the previous TEF round where the line should be drawn between Requires Improvement and merely meeting the sector’s basic requirements. This applied only to the very small number of providers whose provision did not appear, to put it plainly, to be good enough.

    But more clarity in the next round about the connection between baseline requirements and the ratings above them should aid assessment processes. Clarification that in the future a Bronze award signifies “meeting the minimum quality requirements” is also welcome. Although the sector will need time to adjust to this change, it is in line with the risk-based approach OfS wants to take to the quality system overall.

    The £25,000 question

    Underlying all of the questions being asked by providers now is a fundamental one: how will we do next time?

    Looking at the proposals with my assessor’s hat on, I can’t predict what will happen for individual providers, but it does seem that the evolved approach to awarding ratings should be more transparent and more consistent. Providers need to continue to understand their own education-related data, both quantitative and qualitative, and commit to a whole-institution approach to embedding improvements, working in close partnership with students.

    Assessment panels will continue to take their roles very seriously, to engage fully with the agreed criteria, and to do everything they can to make a positive contribution to encouraging, recognising and rewarding teaching excellence in higher education.
