Tag: Quality

  • Effective finance governance is about balancing high quality data with managing existential uncertainty

    Higher education institutions’ finances are not like those of other organisations, combining as they do a strange blend of commercial imperative and charitable purpose.

    A large portion of their revenue is driven by loss-making activity in research and in programmes that do not cover their costs; their surplus-generating activity in international recruitment is hyper-competitive; and their cost base in salaries, pensions, and infrastructure is influenced by factors outside their direct control.

    The current moment of financial pressure on higher education has tightened focus on the governance of university finances, with concerns expressed by the Department for Education and the Office for Students in the English context, and particular scrutiny from government and regulators in Scotland in light of the financial crisis at the University of Dundee last year.

    To the extent that governments set the terms of the higher education funding settlement it is perhaps unreasonable to lay blame for any given higher education institution’s financial struggles at the feet of the board of governors or university leadership. But even with this caveat, the realities of the current moment call for well-managed internal financial governance and robust scrutiny and challenge of the executive’s plans from governing bodies.

    None of this is straightforward – the structures and cultures of higher education require a level of negotiation between academic priorities, external policy drivers, and organisational sustainability. Commercial acumen must be balanced with consciousness of the social mission, and the rewards offered by short-term opportunities set against the responsibility to steward, for the long term, organisations that play a critical role in national wellbeing.

    Together with TechnologyOne, we recently convened a private round table discussion among a group of COOs and financial directors, representing a diverse range of higher education institutions. We wanted to explore how these pressures are manifesting as emerging priorities for governance, and the nature of those priorities for finance leaders.

    Board cultures and capabilities

    One participant wryly observed that not every board member may have a full understanding of the scale of the challenges facing the sector as a whole, and their institution in particular, at the point of taking up their role, and their first exposure to the financial realities can sometimes be shocking. Commercial experience and acumen are much in demand on boards in financially challenging times, but that commercial awareness has to be deployed in the service of financial sustainability – and the definition of “sustainability” can be something of a moving target, especially when the future is uncertain.

    Attendees shared several examples of the kind of tensions around financial decision-making boards have to work through: between the cash demands of the next 18 months and the longer-term investments that will ensure the institution is still able to achieve its mission five years or a decade into the future; or between stockpiling reserves to guard against future risks versus delivering mission-led activity.

    There can be no right answer to these questions, and ultimately it is for the leadership of the institution to be accountable for these kinds of strategic choices. It is not that board members don’t understand the financial fundamentals, but that, attendees reflected, the nature of the trade-offs and the implications of some decisions may not be fully taken into account as the discussion unfolds. Financial directors and CFOs can play a critical role in ensuring these board-level discussions are shaped constructively, through prior briefing with board and committee chairs, and through being brought into the discussion as appropriate.

    Risk, risk appetite and forecasting

    Boards are, in light of ongoing public discussion about the risk of institutional financial crisis or even insolvency, naturally concerned about avoiding being the next institution to hit the headlines as facing serious financial challenge. Paradoxically, there was also a sense that this driving concern can lead to risk-averse behaviours that are not always in the best interest of the organisation, such as conserving cash that could be used for surplus-generating activity, or looking at revenue raising independently from the costs implied in raising revenue – the gap between the revenue and real cost of undertaking research being a classic example.

    One area to improve is understanding of risks, and of risk appetite. Boards can, broadly, be apprised of risk, and particularly financial risk. However, they can be less fluent in considering the risk they are willing to endure in order to solve some of their underlying challenges, or the relationship between risk and opportunity. For example, boards may see an inherent risk in their cash flow position. They often lean toward conserving cash (a low risk appetite), but this may actually worsen their cash position if they do not look at revenue generation (a riskier proposition). At the other end of the spectrum, boards may be tempted to pursue opportunities to raise revenue that do not contribute to, or that distract from, the wider organisational mission and strategic objectives.

    Dealing with uncertainty is never easy, and there was a lively discussion about the role and purpose of financial forecasting, with one attendee pointing out that the idea of creating a five-year financial forecast in a sector that is changing so rapidly is “a bit of a nonsense”, and another observing that “the only thing we know when we’re putting together our forecast is that it’s wrong.”

    It was noted that some boards spend very little time on the forecast and it was suggested that this was an area for greater focus, not to attempt to accurately predict the unpredictable but to socialise discussion about the nature of the uncertainties and their implications. One attendee argued that the point of the forecast is not in the accuracy of the numbers but that there are agreed actions following from the forecast – “we know what we’re going to do as a result.” Another suggested that the Office for Students could potentially offer some additional insight into what it expects to see in the financial returns at the point of preparing those returns, rather than raising concerns after the fact.

    Data and systems

    The institutional systems that bring together disparate financial systems into a single picture are of varying quality. Sometimes, universities are dependent on an amalgamation of systems, spreadsheets, and other data sources that involves a degree of manual reconciliation. Inevitably, the more systems that exist and the more people who input data, the more room there is for disagreement and error. Even the most sophisticated systems that include automation and checks are only as accurate as the information provided to them.

    The accuracy and clarity of financial information matter enormously. Without them, it becomes impossible to know where the gaps are in terms of income and costs. Managers and budget-holders cannot understand their own situation, and it becomes much harder to present a clear picture to executive teams and, from there, to boards. A key “ask” of financial management systems was to integrate with other data sources in ways that allow the presentation of financial information to be legible and allow a clear story to emerge.

    Attendees at the round table reported a number of areas of focus in tightening up internal financial management and visibility of financial information. One critical area of focus was in improving general financial literacy across the organisation, so that institutional staff could understand their institution’s financial circumstances in more detail. Institutional sustainability is everybody’s problem, not just the finance team’s.

    In reporting to the board, attendees were working on shortening and clarifying papers, providing more contextual information, and making greater use of visual aids and diagrams, with one attendee noting that “the quality of management reports is an enabler of good governance.”

    In times of financial pressure and challenge, the quality of financial decision-making is ever more intimately tied to the quality of financial information. Budget holders, finance teams, executive teams, and boards all need to be able to assess the current state of things and plan for the future, despite its uncertainties.

    Effective governance in this context doesn’t mean fundamentally changing the management processes or governors departing from their traditional role of scrutiny and accountability, but it does mean engaging in an ongoing process of improving basic financial processes and management information – while at the same time embedding a culture of constructive discussion about the overall financial position across the whole institution.

    This article is published as part of a partnership with TechnologyOne, focused on effective financial governance. Join Wonkhe and TechnologyOne on Thursday 29 January 12.00-1.00pm for a free webinar, Show them the money: exploring effective governance of university finances.

  • Rethinking Lead Quality for Marketing-Admissions Alignment

    Why Quality Beats Quantity in Student Recruitment

    Many institutions measure enrollment success by the size of their funnel. However, lead volume alone doesn’t translate into student enrollments, and in many cases, it creates more friction than results.

    When marketing teams are tasked with generating as many student leads as possible, admissions teams are often left to sift through a flood of prospects who were never the right fit. The result is wasted effort, strained teams, and disappointing yield. A smarter approach focuses on lead quality, not volume, and requires marketing and admissions to work together from the very beginning.

    The Risks of a Volume-Driven Mindset

    A volume-driven approach creates several hidden risks that undermine enrollment goals.

    First, marketing may deliver impressive lead numbers that admissions teams simply can’t convert. When success is defined by quantity alone, campaigns are optimized for clicks and form fills, not for intent or fit. Admissions counselors then spend valuable time chasing prospects who lack academic readiness, program alignment, or enrollment urgency.

    Second, high lead volume increases operational burden. Admissions teams are forced into reactive mode — managing inboxes, repeating outreach attempts, and documenting interactions that rarely progress. Over time, this erodes morale and reduces the attention given to the strongest applicants.

    Finally, institutions often spend more on advertising without improving outcomes. Larger budgets drive more traffic, but without stronger targeting and messaging, enrollment yield remains flat. This cycle reinforces siloed operations rather than solving for them.

    As explored in my recent article about why admissions and marketing collaboration matters, alignment across teams — not scale — is the real growth lever.

    How Discovery Shapes Lead Quality

    High-quality recruitment doesn’t start with campaigns — it starts with clarity. And clarity is the product of strong discovery paired with powerful and differentiated storytelling.

    Discovery is where marketing and admissions teams uncover what actually drives enrollment success: who thrives in the program, why they choose it, what doubts they need resolved, and what outcomes actually motivate action. Without this foundation, messaging tends to default to broad, generic claims that attract attention but fail to reach the right students.

    Strong brand strategies don’t try to appeal to everyone. They’re built around intentional differentiation and can clearly articulate who the institution is a right fit for, what it stands for, and what makes its experience distinct. This, in turn, creates deeper engagement that translates into more qualified prospects. 

    When institutional storytelling is rooted in discovery, messaging becomes more precise and authentic. Instead of overpromising or relying on broad aspirational language, marketing communicates real program strengths, expectations, and outcomes. This clarity acts as a filter. Prospective students who see themselves in the story lean in with higher intent, while those who are misaligned self-select out earlier in the funnel.

    For admissions teams, this translates into more productive conversations. Leads arrive with clearer expectations, stronger program fit, and greater readiness to move forward. 

    In short, discovery-led storytelling reduces friction across the funnel. Marketing attracts fewer but better-aligned prospects, admissions spends less time correcting misalignment, and institutions see stronger enrollment outcomes driven by relevance rather than volume.

    Building Marketing-Admissions Alignment

    True alignment requires more than good intentions. It demands shared definitions, shared metrics, and ongoing communication.

    Institutions must define key performance indicators (KPIs) that connect lead quality to enrollment outcomes — such as yield, time to application, and retention — rather than isolating marketing performance from admissions results. When teams agree on what “good” looks like, strategy becomes easier to execute.

    Messaging, targeting, and follow-up should also be aligned around program goals. Marketing sets expectations honestly and clearly; admissions reinforces those expectations through consistent conversations. Feedback loops allow teams to refine targeting and messaging based on real applicant behavior, not assumptions.

    This approach echoes the mindset shift outlined in my colleague Brian Messer’s recent article, which covered why institutions should stop chasing student leads and focus instead on sustainable enrollment strategies.

    Less Volume, More Conversions

    A smaller pipeline doesn’t mean weaker results. In fact, institutions that prioritize lead quality often see higher conversion rates, stronger retention, and less staff burnout.

    With fewer but better-aligned prospects, admissions teams can focus on meaningful engagement rather than time-consuming, low-yield outreach. Applicants receive clearer guidance, faster responses, and a more personalized experience. And marketing and admissions share accountability for outcomes rather than deflecting responsibility across teams.

    Key Takeaways

    • Lead quality drives stronger enrollment outcomes than raw volume.
    • Discovery is the foundation of high-quality recruitment and clearer positioning.
    • Collaboration between marketing and admissions reduces silos, increases efficiency, and improves yield.

    When marketers prioritize lead quality over lead volume, everyone wins. 

    Improve Lead Quality and Align Marketing and Admissions With Archer

    At Archer Education, we work with your marketing and admissions teams to build sustainable lead generation and enrollment strategies. Our approach focuses on establishing lasting capabilities so that your institution has the tools, training, and insights to operate with confidence. 

    Our enrollment marketing teams conduct deep discovery to inform your campaigns, while our admissions and retention teams provide personalized engagement support to prioritize student success.

    Contact us today to learn more. 

  • China Aims for “Quality” Overseas Students With Entry Exam

    China’s introduction of a standardized admissions exam for international students shows that efforts to build a world-class university system matter more to the country than increasing enrollments, according to experts.

    Beginning with the 2026 intake, most international applicants will be required to take the China Scholastic Competency Assessment (CSCA), a centrally designed test intended to benchmark students from different education systems against a common academic standard.

    The exam will be compulsory for recipients of Chinese government scholarships starting this year and later phased in more widely, becoming mandatory for all international undergraduate applicants by 2028.

    It will be delivered primarily as an online, remotely proctored test, with some countries also offering offline test centers.

    Richard Coward, CEO at Global Admissions, an agency that helps international students apply to universities, said the policy was “one of the biggest changes” he had seen for international students studying in China.

    “This is more about the shift in focus away from quantity to quality, which is happening all over the world. Previously China had the target of 500,000 students; now the target is towards world-class universities by 2050 with the double first-class initiative.”

    “There is a great deal of variation in students with different academic backgrounds and it can be challenging to assess,” Coward said. “There are also many countries that don’t have the equivalent level of maths compared with China. This change aims to make all international applicants have the same standard so they’ll be able to follow the education at Chinese universities and so they are at least at the same level as local students.”

    Under the new framework, mathematics will be compulsory for all applicants, including those applying for arts and humanities degrees.

    Coward said this reflected “the Chinese educational philosophy that quantitative reasoning is a fundamental baseline for any university-level scholar.”

    Those applying to Chinese-taught programs must also sit for a “professional Chinese” paper, offered in humanities and STEM versions. Physics and chemistry are optional, depending on program requirements. Mathematics, physics and chemistry can be taken in either Chinese or English.

    Gerard Postiglione, professor emeritus at the University of Hong Kong, said the CSCA should be understood as part of a broader shift in China’s approach to internationalization.

    “The increasing narrative in China in all areas is to focus on quality,” he said. “That also means in higher education. If China has the plan by 2035 to become an education system that is globally influential, there’s going to be more emphasis on quality.”

    Postiglione added that the move also reflected how China approaches admissions locally.

    “If you look at how China selects students domestically, there is no back door,” he said, pointing to the importance of the gaokao, China’s national university admissions test taken by local students. “The gaokao is the gaokao, and I don’t think there will be much of a back door for international students, either.”

    He cautioned, however, that the framework may favor applicants with certain backgrounds.

    “Language proficiency and subject preparation will inevitably advantage some students over others,” he said. “Students who have already studied in Chinese, or who come from systems with stronger mathematics preparation, may find it easier to meet the requirements.”

    While the exam framework is centrally set, Postiglione said, individual universities are likely to retain autonomy over admissions decisions.

    “The Ministry of Education will provide a framework and guidelines,” he said, “but it would be very difficult for a central agency to make individual admissions decisions across the entire system.”

    Pass thresholds have not yet been standardized. Coward said that universities may set minimum score requirements in the future, but none are in place yet.

    He added that the additional requirement was unlikely to reduce demand. “Some more casual students may be deterred,” he said. “But for top-tier universities, it reduces administrative burden by filtering for quality early.”

    In the longer term, though, “it signals that a Chinese degree is becoming more prestigious, which may actually increase demand from high-caliber students.”

  • Defining quality is a thorny problem, but we shouldn’t shy away from the Government’s intention to make sure every student gets the best deal

    Join HEPI for a webinar on Thursday 11 December 2025 from 10am to 11am to discuss how universities can strengthen the student voice in governance to mark the launch of our upcoming report, Rethinking the Student Voice. Sign up now to hear our speakers explore the key questions.

    This blog is kindly authored by Meg Haskins, Policy Manager at the Russell Group.

    You can read HEPI’s other blogs on the current OfS consultation here and here.

    Quality is one of the most frequently used, yet least clearly defined, concepts in higher education. For decades, debates have rumbled on about how best to measure it, and yet the term continues to be used liberally and often vaguely. From university marketing promising a “high-quality student experience” to political critiques of so-called “Mickey Mouse courses,” the term is everywhere – but its precise meaning remains elusive.

    Quality matters: to students making significant financial and personal investments; to staff who take pride in their teaching and research; to funders and policymakers; and to the UK’s global reputation. If we’re asking students to take out significant loans and trust that higher education will act as a springboard into their futures, we must not only deliver quality but also demonstrate it clearly, transparently and in ways that support ongoing improvement.

    The OfS consultation is the sector’s golden opportunity to define how this is done.

    The Russell Group supports a more integrated and streamlined quality assessment system – one that reduces duplication, improves clarity and actively supports efforts to enhance quality further. But integration must not come at the expense of flexibility within the model. The system needs to make space for narrative contextualisation rather than reductive judgements.

    Heavy reliance on benchmarking is particularly concerning. It risks disadvantaging institutions with a historically strong absolute performance and limiting meaningful differentiation. To ensure fairness, absolute values must carry greater weight, and there should be transparency on benchmark thresholds and definitions of “material” deviation, especially for outcomes which will have regulatory and funding consequences.

    So far, ministers have been light on detail about what change they’re actually expecting to see on quality assurance. Ideas of linking quality measures to recruitment numbers or fee levels have caused concern, which is understandable given that the system for measuring quality is untested. But we shouldn’t fear greater scrutiny. Students, taxpayers and the public deserve clarity about what quality looks like in real terms – and reassurance that it is being delivered at a high level and consistently.

    Demonstrating quality is something Russell Group universities have always taken seriously, and is now under increasing public scrutiny in the face of rhetoric from certain political quarters about “rip-off degrees”. As such, our universities have taken steps to measure and robustly evidence the quality of our provision. Beyond regulatory metrics, graduate outcomes surveys, the TEF and professional body accreditations, our universities embed quality assurance through multiple levels of governance, including academic boards and senates, independent audits, annual and periodic module and programme reviews, and student feedback mechanisms. This has led to continuous improvement and enhancement of quality at our universities, reflected in the strength of their outcomes.

    Crucially, high quality is not about selectivity or league tables. The Secretary of State is rightly clear in her ambition for all young people to have a wide range of excellent options across different institutions, levels and qualification types. But this choice needs to go hand-in-hand with quality, which is why we need baseline expectations across all institutions and swift regulatory action where these standards aren’t met.

    If the sector embraces greater scrutiny in this way, then metrics must be robust, transparent and fair. Streamlining and clarifying processes should reduce duplication and burden, while maintaining a strong focus on enhancement.

    The regulator has both carrots and sticks at its disposal. While it is positive to see an intention to reward high-quality provision, benchmarking that obscures excellence could inadvertently punish those delivering the strongest outcomes – surely not the government’s intention.

    Particularly worrying is the idea that the OfS could start deriving overall ratings from a lower individual aspect rating. This compresses results and risks obscuring examples of high-quality provision, adding little value for students. Even more concerning is the proposal to reclassify the Bronze ratings as a trigger for regulatory intervention. This could redefine the baseline for compliance as a form of failure in quality, and blur the line between judgements of excellence and regulatory compliance – a muddled message for providers and confusing for students.

    Ultimately, the goal must be a more outward-facing quality model – one that strengthens public and ministerial trust, reinforces the UK’s global credibility, and upholds the reputation for excellence that underpins our higher education sector.

    By positioning higher tuition fees as one side of a “deal,” the Government is challenging the sector to demonstrate, clearly and confidently, that students are receiving both a high-quality experience and high-quality outcomes in return. That deal will only be credible if quality is defined fairly, measured transparently, and assessed in ways that support enhancement as well as accountability.

  • TEF proposals’ radical reconfiguration of quality risks destabilising the sector – here’s the fix

    The post-16 education and skills white paper reiterates what the Office for Students’ (OfS) recent consultation on the future of the Teaching Excellence Framework (TEF) had already made quite clear: there is a strong political will to introduce a regulatory framework for HE that imposes meaningful consequences on providers whose provision is judged as being of low quality.

    While there is much that could be said about the extent to which TEF is a valid way of measuring quality or teaching excellence, we will focus on the potential unintended consequences of OfS’s proposals for the future of TEF.

    Regardless of one’s views of the TEF in general, it is relatively uncontroversial to suggest that TEF 2023 was a material improvement on its predecessor. In an analysis of the outcomes from the 2017 TEF exercise, it was clear that a huge volume of work had gone into establishing a ranking of providers which was far too closely correlated with the characteristics of their student body.

    Speaking plainly, the optimal strategy for achieving Gold in 2017 was to avoid recruiting too many students from socially and economically disadvantaged backgrounds. In 2017, the 20 providers with the fewest FSM students had no Bronze awards, while the 20 with the highest proportions had no Gold awards associated with their provision.

    Following the changes introduced in the next round of TEF assessments, while there still appears to be a correlation between student characteristics and TEF outcomes, the relationship is not as strong as it was in 2017. Here we have mapped the distribution of TEF 2023 Gold, Silver and Bronze ratings for providers with the lowest (Table 1) and highest (Table 2) proportions of students who have received free school meals (FSM).

    In TEF 2023, the link between student characteristics and TEF outcome was less pronounced. This is a genuine improvement, and one we should ensure is not lost under the new proposals for TEF.

    Reconfiguring the conception of quality

    The current TEF consultation proposes radical changes, not least of which is the integration of the regulator’s assessment of compliance with the B conditions of registration which deal with academic quality.

    At present, TEF differentiates between different levels of quality that are all deemed to be above minimum standards – built upon the premise that the UK higher education sector is, on average, “very high quality” in an international context – and operates in parallel with the OfS’s approach to ensuring compliance with minimum standards. The proposal to merge these two aspects of regulation is being posited as a way of reducing regulatory burden.

    At the same time, the OfS – with strong ministerial support – is making clear that it wants to ensure there are regulatory consequences associated with provision that fails to meet their thresholds. And this is where things become more contentious.

    Under the current framework, a provider is technically not eligible to participate in TEF if it is judged by the OfS to fall foul of minimum quality expectations. Consequently, TEF ratings of Bronze, Silver and Gold are taken to correspond with High Quality, Very High Quality and Outstanding provision, respectively. While a fourth category, Requires Improvement, was introduced for 2023, vanishingly few providers were given this rating.

    Benchmarked data on the publicly available TEF dashboard in 2023 were deemed to contribute no more than 50 per cent of the weight in each provider’s aspect outcomes. Crucially, data that was broadly in line with benchmark was deemed – as a starting hypothesis, if you will – to be consistent with a Silver rating: again, reinforcing the message that the UK HE sector is “Very High Quality” on the international stage.

    Remember this, as we journey into the contrasts with proposals for the new TEF.

    Under the proposed reforms, OfS has signalled that providers failing to be of sufficient quality would be subject to regulatory consequences. Such consequences could span from enhanced monitoring to – in extremis – deregistration; such processes and penalties would be led by OfS. We have also received a clear indication that the government may wish to tie permission to grow, and to receive inflation-linked fee increases, to quality outcomes. In other words, providers who fail to achieve a certain rating in TEF may experience student number caps and fee freezes.

    These are by no means minor inconveniences for any provider, and so one might reasonably expect that the threshold for implementing such penalties would be set rather high – from the perspectives both of the proportion of the sector that would, in a healthy system, be subject to regulatory action or governmental restriction at any one time, and the operational capacity of the OfS properly to follow through and follow up on the providers that require regulatory intervention. On the contrary, however, it is being proposed that both Requires Improvement- and Bronze-rated providers would be treated as inadequate in quality terms.

    While a provider rated as Requires Improvement might expect additional intervention from the regulator, it seems less obvious why a provider rated Bronze – which was previously defined as a High Quality provider – should expect to receive enhanced regulatory scrutiny and/or restrictions on their operation.

    It’s worse than we thought

    As the sector regulator, OfS absolutely ought to be working to identify areas of non-compliance and inadequate quality. The question is whether these new proposals achieve that aim.

    This proposal amounts to OfS making a fundamental change to the way it conceptualises the very notion of quality and teaching excellence, moving from a general assumption of high quality across the sector to the presumption that there is low quality at a scale hitherto unimagined. While the potential consequences of these proposed reforms are important at the level of an individual provider, and for student and prospective students’ perceptions, it is equally important to ask what they mean for the HE sector as a whole.

    Figure 1 illustrates the way in which the ratings of quality across our sector might change, should the current proposals be implemented. This first forecast is based upon the OfS’s proposal that overall provider ratings will be defined by the lowest of their two aspect ratings, and shows the profile of overall ratings in 2023 had this methodology been applied then.

    There are some important points to note regarding our methodology for generating this forecast. First, as we mentioned above, OfS has indicated an intention to base a provider’s overall rating on the lowest of the two assessed aspects: Student Experience and Student Outcomes. In TEF 2023, providers with mixed aspects, such as Bronze for one and Silver for another, may still have been judged as Silver overall, based on the TEF panel’s overall assessment of the evidence submitted. Under the new framework, this would not be possible, and such a provider would be rated Bronze by default. In addition, we are of course assuming that there has been no shift in metrics across the sector since the last TEF, and so these figures need to be taken as indicative and not definitive.

    Figure 1: Predicted future TEF outcomes compared with TEF 2023 actual outcomes

    There are two startling points to highlight:

    • The effect of this proposed TEF reform is to drive a downward shift in the apparent quality of English higher education, with a halving of the number of providers rated as Outstanding/Gold, and almost six times the number of providers rated as Requires Improvement.
    • The combined number of Bronze and Requires Improvement providers would increase from 50 to 89. Taken together with the proposal to reframe Bronze as being of insufficient quality, OfS could be subjecting nearly 40 per cent of the sector to special regulatory measures.

    In short, the current proposals risk serious destabilisation of our sector and, we argue, could end up making the very concept of quality in education less, not more, clear for students.

    Analysis by provider type

    Further analysis of this shift reveals that these changes would have an impact across all types of provider. Figures 2a and 2b show the distribution of TEF ratings for the 2023 and projected future TEF exercises, where we see high, medium and low tariff providers, as well as specialist institutions, equally impacted. For the 23 high tariff providers in particular, the changes would see four providers fall into the enhanced regulatory space of Bronze ratings, whereas none were rated less than Silver in the previous exercise. For specialist providers, of the current 42 with 2023 TEF ratings, five would be judged as Requires Improvement, whereas none received this rating in 2023.

    Figure 2a: Distribution of TEF 2023 ratings by provider type

    Figure 2b: Predicted distribution of future TEF ratings by provider type

    Such radical movement in OfS’s overall perception of quality in the sector requires explanation. Either the regulator believes that the current set of TEF ratings were overly generous and the sector is in far worse health than we have assumed (and, indeed, than we have been advising students via current TEF ratings), or else the very nature of what is considered to be high quality education has shifted so significantly that the way we rate providers requires fundamental reform. While the former seems very unlikely, the latter requires a far more robust explanation than has been provided in the current consultation.

    We choose to assume that OfS does not, in fact, believe that the quality of education in English HE has fallen off a cliff edge since 2023, and also that it is not intentionally seeking to radically redefine the concept of high quality education. Rather, in pursuit of a regulatory framework that does carry with it material consequences for failing to meet a robust set of minimum standards, we suggest that perhaps the current proposals have missed an opportunity to make more radical changes to the TEF rating system itself.

    We believe there is another approach that would help the OfS to deliver its intended aim, without destabilising the entire sector and triggering what would appear to be an unmanageable volume of regulatory interventions levelled at nearly 40 per cent of providers.

    Benchmarks, thresholds, and quality

    In all previous iterations of TEF, OfS has made clear that both metrics and wider evidence brought forward in provider and student submissions are key to arriving at judgements of student experience and outcomes. However, the use of metrics has very much been at the heart of the framework.

    Specifically, the OfS has gone to great lengths to provide metrics that allow providers to see how they perform against benchmarks that are tailored to their specific student cohorts. These benchmarks sit alongside the B3 minimum thresholds for key metrics, which OfS expects all providers to achieve. For the most part, providers eligible to enter TEF would have all metrics sitting above these thresholds, leaving the judgement of Gold, Silver and Bronze as a matter of the distance from the provider’s own benchmark.

    The methodology employed in TEF has also been quite simple to understand at a conceptual level:

    • A provider with metrics consistently 2.5 per cent or more above benchmark might be rated as Gold/Outstanding;
    • A provider whose metrics are consistently within ±2.5 per cent of their benchmarks would likely be assessed as Silver/Very High Quality;
    • Providers who are consistently 2.5 per cent or more below their benchmark would be Bronze/High Quality or Requires Improvement.

    There is no stated numerical threshold for the boundary between Bronze and Requires Improvement – this is a matter of holistic panel judgement, including but not limited to how far beyond 2.5 per cent below benchmark a provider’s data sits.

    It is worth noting here that in the current TEF, Bronze ratings (somewhat confusingly) could only be conferred for providers who could also demonstrate some elements of Silver/Very High Quality provision. Under the new TEF proposals, this requirement would be dropped.

    The challenge we see here is with the definition of Bronze being >2.5 per cent below benchmark; the issue is best illustrated with an example of two hypothetical Bronze providers:

    Let’s assume both Provider A and B have received a Bronze rating in TEF, because their metrics were consistently more than 2.5 per cent below benchmark, and their written submissions and context did not provide any basis on which a higher rating ought to be awarded. For simplicity, let’s pick a single metric, progression into graduate employment, and assume that the benchmark for these two providers happens to be the same, at 78 per cent.

    In this example, Provider A obtained its Bronze rating with a progression figure of 75 per cent, which is 3 percentage points below its benchmark. Provider B, on the other hand, had a progression figure of 63 per cent. While this is a full 12 percentage points worse than Provider A, it is nonetheless still 3 percentage points above the minimum threshold specified by OfS, which is 60 per cent, and so it was not rated as Requires Improvement.

    Considering this example, it seems reasonable to conclude that Provider A is doing a far better job of supporting a comparable cohort of students into graduate employment than Provider B, but under the new TEF proposals, both are judged as being Bronze, and would be subject to the same regulatory penalties proposed in the consultation. From a prospective student’s perspective, it is hard to see what value these ratings would carry, given they conceal very large differences in the actual performance of the providers.
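
    To make the arithmetic concrete, the hypothetical example above can be sketched in a few lines of Python. This is purely illustrative: it applies only the simplified banding described in this piece (a 2.5 percentage point band around benchmark, plus the 60 per cent minimum threshold) to a single metric, whereas real TEF judgements weigh many indicators alongside narrative evidence.

      # Illustrative sketch only, using the hypothetical figures above -
      # not the actual TEF or B3 assessment methodology.
      def classify(value, benchmark, minimum_threshold=60.0):
          distance = value - benchmark        # percentage points above (+) or below (-) benchmark
          if value < minimum_threshold:
              return "Requires Improvement"   # below the absolute minimum threshold
          if distance > 2.5:
              return "Gold"
          if distance >= -2.5:
              return "Silver"
          return "Bronze"                     # more than 2.5 points below benchmark, but above the minimum

      # Both hypothetical providers share a benchmark of 78 per cent
      for name, progression in [("Provider A", 75.0), ("Provider B", 63.0)]:
          print(name, classify(progression, benchmark=78.0))
      # Both print "Bronze", despite the 12 percentage point gap in actual performance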

    On the assumption that the Requires Improvement category would be retained for providers with more serious challenges – such as being below minimum thresholds in several areas – the obvious problem is that Bronze as a category in the current proposal is simply being stretched so far, it will lose any useful meaning. In short, the new Bronze category is too blunt a tool.

    An alternative – meet Meets Minimum Requirements

    As a practical solution, we recommend that OfS considers a fifth category, sitting between Bronze and Requires Improvement: a category of Meets Minimum Requirements.

    This approach would have two advantages. First, it would allow the continued use of Bronze, Silver and Gold in such a way that the terms retain their commonly understood meanings; a Bronze award, in common parlance, is not a mark of failure. Second, it would allow OfS to distinguish providers who, while below our benchmark for Very High Quality, are still within a reasonable distance of their benchmark such that a judgement of High Quality remains appropriate, from those whose gap to benchmark is striking and could indicate a case for regulatory intervention.

    The judgement of Meets Minimum Requirements would mean the provider’s outcomes do not fall below the absolute minimum thresholds set by the regulator, but equally are too far from their benchmark to be awarded a quality kitemark of at least a Bronze TEF rating. The new category would reasonably be subject to increased regulatory surveillance, given the borderline risk that providers rated in this way could fail to meet minimum standards in future.

    We argue that such a model would be far more meaningful to students and other stakeholders. TEF ratings of Bronze, Silver and Gold would continue to represent an active recognition of High, Very High, and Outstanding quality, respectively. In addition, providers meeting minimum requirements (but not having earned a quality kitemark in the form of a TEF award) would be distinguishable from providers who would be subject to active intervention from the regulator, due to falling below the absolute minimum standards.

    It would be a matter for government to consider whether providers deemed to be meeting minimum requirements should receive inflation-linked uplifts in fees, and should be permitted to grow; indeed, one constructive use of the increased grading nuance we propose here could be that providers who meet minimum requirements are subject to student number caps until they can demonstrate capability to grow safely by improving to the point of earning at least a Bronze TEF award. Such a measure would seem proportionately protective of the student interest, while still differentiating those providers from providers who are actively breaching their conditions of registration and would be subject to direct regulatory intervention.

    Modelling the impact

    To model how this proposed approach might impact overall outcomes in a future TEF, we have, in the exercise that follows, used TEF 2023 dashboard data and retained the statistical definitions of Gold (more than 2.5 per cent above benchmark) and Silver (within ±2.5 per cent of benchmark) from the current TEF. We have modelled a proposed definition of Bronze as between 2.5 and 5 per cent below benchmark. Providers who Meet Minimum Requirements are defined as being between 5 and 10 per cent below benchmark, and Requires Improvement reflects metrics more than 10 per cent below benchmark.

    For the sake of simplicity, we have taken the average distance from benchmark for all Student Experience and Student Outcomes metrics for each provider to categorise providers for each Aspect Rating. The outcome of our analysis is shown in Table A, and is contrasted in Table B with an equivalent analysis under OfS’s current proposals to redefine a four-category framework.
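
    As a rough sketch of this modelling, the banding and the “lowest aspect wins” rule can be expressed as below. The band edges (2.5, 5 and 10 percentage points below benchmark) follow the definitions above; treating the edges as inclusive, and averaging the metrics beforehand, are simplifying assumptions made for illustration only.

      # Hedged sketch of the five-category model described above.
      # Inputs are average distances from benchmark, in percentage points (+ above, - below).
      RANK = ["Requires Improvement", "Meets Minimum Requirements", "Bronze", "Silver", "Gold"]

      def aspect_rating(avg_distance):
          if avg_distance > 2.5:
              return "Gold"
          if avg_distance >= -2.5:
              return "Silver"
          if avg_distance >= -5.0:
              return "Bronze"
          if avg_distance >= -10.0:
              return "Meets Minimum Requirements"
          return "Requires Improvement"

      def overall_rating(experience_distance, outcomes_distance):
          # Overall rating is the lower of the two aspect ratings, per the OfS proposal
          ratings = [aspect_rating(experience_distance), aspect_rating(outcomes_distance)]
          return min(ratings, key=RANK.index)

      # Hypothetical provider: experience 1.0 point above benchmark, outcomes 6.2 points below
      print(overall_rating(1.0, -6.2))        # "Meets Minimum Requirements"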

    Table A. Distribution of aspect ratings according to a five-category TEF framework

    Table B. Distribution of aspect ratings according to OfS’s proposed four-category TEF framework

    Following OfS’s proposal that a provider would be given an overall rating that reflects the lowest rating of the two aspects, our approach leads to a total of 32 providers falling into the Meets Minimum Requirements and Requires Improvement categories. This represents 14 per cent of providers, which is substantially fewer than the 39 per cent of providers who would be considered as not meeting high quality expectations under the current OfS proposals. It is also far closer to the 22 per cent of providers who were rated Bronze or Requires Improvement in TEF 2023.

    We believe that our approach represents a far more valid and meaningful framework for assessing quality in the sector, while OfS’s current proposals risk sending a problematic message that, since 2023, quality across the sector has inexplicably and catastrophically declined. Adding granularity to the ratings system in this way will help OfS to focus its regulatory surveillance where it is likely to be most useful: on provision that is potentially of low quality.

    Figure 4, below, illustrates the distribution of potential TEF outcomes based on OfS’s four category rating framework, contrasted with our proposed five categories. It is important to note that this modelling is based purely on metrics and benchmarks, and does not incorporate the final judgement of TEF panels, based on the narrative submissions providers submit.

    This is particularly important because previous analysis has shown that many providers whose metrics were not significantly above benchmark, or not broadly in line with benchmark, were nonetheless awarded Gold or Silver ratings respectively, on the basis of robust narrative submissions and other evidence. Equally, some providers with data that was broadly in line with benchmark were awarded Bronze ratings overall, as the further evidence submitted in the narrative statements failed to convince the panel of an overall picture of very high quality.

    Figure 4: Predicted profile of provider ratings in a four- and five-category framework

    The benefits of a five-category approach

    First, the concept of a TEF award in the form of a Gold, Silver or Bronze rating retains its meaning for students and other stakeholders. Any of these three awards reflect something positive about a provider delivering beyond what we minimally expect.

    Second, the pool of providers potentially falling into categories that would prompt enhanced scrutiny and potential regulatory intervention/governmental restrictions would drop to a level that would be a much fairer reflection of the actual quality of our sector. We simply do not believe anyone could be convinced that as much as 40 per cent of our sector is not of sufficiently high quality.

    Third, referencing the socio-economic diversity data by 2023 TEF award in Tables 1 and 2, and the future TEF outcomes modelling in Figure 1, our proposal significantly reduces the risk that students who were previously eligible for free school meals (who form a large proportion of the cohorts at Bronze-rated providers) would be further disadvantaged by their HE environment being impoverished via fee freezes and student number caps. We argue that such potential measures should be reserved for the Requires Improvement and, plausibly, Meets Minimum Requirements categories.

    Fourth, by expanding the range of categories, OfS would be able to distinguish between providers who are in fact meeting minimum expectations, but not delivering quality in experience or outcomes that would allow them to benefit from some of the freedoms proposed to be associated with TEF awards, and providers who are, in at least one of these areas, failing to meet even those minimum expectations.

    To recap, the key features of our proposal are as follows:

    • Retain Bronze, Silver and Gold in the TEF as ratings that reflect a positive judgement of High, Very High, and Outstanding quality, respectively.
    • Introduce a new rating – Meets Minimum Requirements – that recognises providers who are delivering student experience and outcomes that are above regulatory minimum thresholds, but are too far from benchmarks to justify an active quality award in TEF. This category would be subject to increased OfS surveillance, given the borderline risk of provision falling below minimum standards in future.
    • Retain Requires Improvement as a category that indicates a strong likelihood that regulatory intervention is required to address more serious performance issues.
    • Continue to recognise Bronze ratings as a mark of High Quality, and position the threshold for additional regulatory restrictions or intervention such that these would apply only to providers rated as Meets Minimum Requirements or Requires Improvement.

    Implementing this modest adaptation to the current TEF proposals would safeguard the deserved reputation of UK higher education for high-quality provision, while meeting the demand for a clear plan to secure improvements to quality and tackle pockets of poor quality.

    The deadline for responding to OfS’s consultation on TEF and the integrated approach to quality is Thursday 11 December.

  • Quality assurance behind the dashboard

    The depressing thing about the contemporary debate on the quality of higher education in England is how limited it is.

    From the outside, everything is about structures, systems, and enforcement: the regulator will root out “poor quality courses” (using data of some sort), students have access to an ombuds-style service in the Office for the Independent Adjudicator, the B3 and TEF arrangements mean that regulatory action will be taken. And so on.

    The proposal on the table from the Office for Students at the moment doubles down on a bunch of lagging metrics (continuation, completion, progression) and one limited lagging measure of student satisfaction (NSS) underpinning a metastasised TEF that will direct plaudits or deploy increasingly painful interventions based on a single precious-metal scale.

    All of these sound impressive, and may give your academic registrar sleepless nights – but none of them offer meaningful and timely redress to the student who has turned up for a 9am lecture to find that nobody has turned up to deliver it – again. Which is surely the point.

    It is occasionally useful to remember how little these visible, sector-level quality assurance systems have to do with actual quality assurance as experienced by students and others, so let’s look at how things currently work and break it down by need state.

    I’m a student and I’m having a bad time right now

    Continuation data and progression data published in 2025 reflect the experience of students who graduated between 2019 and 2022; completion data refers to cohorts between 2016 and 2019; the NSS reflects the opinions of final year students and is published the summer after they graduate. None of these contain any information about what is happening in labs, lecture theatres, and seminar rooms right now.

    As students who have a bad experience in higher education don’t generally get the chance to try it again, any useful system of quality assurance needs to be able to help students in the moment – and the only realistic way that this can happen is via processes within a provider.

    From the perspective of the student, the most common of these are module feedback (the surveys conducted at the end of each unit of teaching) and the work of the student representative (a peer with the ability to feed back on behalf of students). Beyond this, students have the ability to make internal complaints, ranging from a quiet word with the lecturer after the seminar to a formal process with support from the Students’ Union.

    While little national attention has been paid in recent years to these systems and pathways, they represent pretty much the only chance that an issue students are currently facing can be addressed before it becomes permanent.

    The question needs to be whether students are aware of these routes and feel confident in using them – it’s fair to say that experience is mixed across the sector. Some providers are very responsive to the student voice, others may not be as quick or as effective as they should be. Our only measure of these things is via the National Student Survey – about 80 per cent of the students in the 2025 cohort agree that students’ opinions about their course are valued by staff, while a little over two-thirds agree that it is clear that student feedback is acted upon.

    Both of these are up on the equivalent questions from about five years ago, suggesting a slow improvement in such work, but there is scope for such systems to be reviewed and promoted nationally – everything else is just a way for students to possibly seek redress long after anything could be done about it.

    I’m a graduate and I don’t know what my degree is worth / I’m an employer and I need graduate skills

    The value of a degree is multifaceted – and links as much to the reputation of a provider or course as to the hard work of a student.

    On the former, much of the heavy lifting is done by the way the design of a course conforms to recognised standards. For more vocational courses, these are likely to have been set by professional, statutory, and regulatory bodies (PSRBs) – independent bodies who set requirements (with varying degrees of specificity) around what should be taught on a course and what a graduate should be capable of doing or understanding.

    Where no PSRB exists, course designers are likely to map to the QAA Subject Benchmarks, or to draw on external perspectives from academics in other universities. As links between universities and local employment needs solidify, the requirements set by local skills improvement plans (LSIPs) will play a growing part – and it is very likely that these will be mapped to the UK Standard Skills Classification descriptors.

    The academic standing of a provider is nominally administered by the regulator – in England the Office for Students has power to deregister a provider where there are concerns, making it ineligible for state funding and sparking a media firestorm that will likely torch any residual esteem. Events like this are rare – standards are generally maintained via a semi-formal system of cross-provider benchmarking and external examination, leavened by the occasional action of whistleblowers.

    That’s also a pretty good description of how we assure that the mark a graduate is awarded makes sense when compared to the marks awarded to other graduates. External examiners here play a role in ensuring that standards are consistent within a subject, albeit usually at module rather than course level; it’s another system that has been allowed (and indeed actively encouraged) to atrophy, but it still remains the only way of doing this stuff in anything approaching real time.

    I’m an international partner and I can’t be sure that these qualifications align with what we do

    Collaborating internationally, or even studying internationally, often requires some very specific statements around the quality of provision. One popular route to doing this is being able to assert that your provider meets well-understood international standards – the ESG (standards and guidelines for quality assurance in the European Higher Education Area) represent probably the most common example.

    Importantly, the ESG does not set standards about teaching and learning, or awarding qualifications – it sets standards for the way institutional quality assurance processes are assessed by national bodies. If you think that this is incredibly arm’s length you would be right, but it is also the only way of ensuring that the bits of quality assurance that interface with the student experience in near-real-time actually work.

    I am an academic and I want to design courses and teach students in ways that help them to succeed

    Quality enhancement – beyond compliance with academic standards – is about supporting academic staff in making changes to teaching and learning practice (how lectures are delivered, how assessments are designed, how individual support is offered). It is often seen as an add-on, but should really be seen as a core component of any system of quality assurance. Indeed, in Scotland, regulatory quality assurance in the form of the Tertiary Quality Enhancement Framework starts from the premise that tertiary provision needs to be “high quality” and “improving”.

    Outside of Scotland, the vestiges of a previous UK-wide approach to quality enhancement exist in the form of AdvanceHE. Many academic staff will first encounter the principles and practice of teaching quality enhancement via developing a portfolio to submit for fellowship – increasingly a prerequisite for academic promotion. AdvanceHE also supports standards designed to underpin training in teaching for new academic staff, as well as support networks. The era of institutional “learning and teaching offices” (another vestige of a previous government-sponsored measure to support enhancement) is mostly over, but many providers have networks of staff with an interest in the practice of teaching in higher education.

    So what does the OfS actually do?

    In England, the Office for Students operates a deficit model of quality assurance. It assumes that, unless there is some evidence to the contrary, an institution is delivering higher education at an appropriate level of quality. Where the evidence exists for poor performance, the regulator will intervene directly. This is the basis of a “risk based” approach to quality assurance, where more effort can be expended in areas of concern and less burden placed on providers.

    For a system like this to work in a way that addresses any of the needs detailed above, OfS would need far more, and more detailed, information on where things are going wrong as soon as they happen. It would need to be bold in acting quickly, often based on incomplete or emerging evidence. Thus far, OfS has been notably averse to legal risk (having had its fingers burned by the Bloomsbury case), and has failed (despite a sustained attempt in the much-maligned Data Futures) to meaningfully modernise the process of data collection and analysis.

    It would be simpler and cheaper for OfS to support and develop institutions’ own mechanisms for quality and academic standards – an approach that would allow student issues to be dealt with quickly and effectively at that level. A stumbling block here would be the diversity of the sector, with the unique forms and small scale of some providers making it difficult to design any form of standardisation into these systems. The regulator itself, or another body such as the Office of the Independent Adjudicator (as happens now), would act as a backstop for instances where these processes do not produce satisfactory results.

    The budget of the Office for Students has grown far beyond the ability of the sector to support it (as was originally intended) via subscription. It receives more than £10m a year from the Department for Education to cover its current level of activity – it feels unlikely that more funds will arrive from either source to enable it to quality assure 420 providers directly.

    All of this would be moot if there were no current concerns about quality and standards. And there are many – stemming both from corners being cut (and systems being run beyond capacity) due to financial pressures, and from a failure to regulate in a way that grows and assures a provider’s own capacity to manage quality and standards. We’ve seen evidence from the regulator itself that the combination of financial and regulatory failures has led to many examples of quality and standards problems: courses and modules closed without suitable alternatives for students, difficulties faced by students in accessing staff and facilities due to overcrowding or underprovision, and concerns about an upward pressure on marks from a need to bolster continuation and completion rates.

    The route through the current crisis needs to be through improvement in providers’ own processes, and that would take something that the OfS has not historically offered the sector: trust.

    Source link

  • High quality learning means developing and upskilling educators on the pedagogy of AI

    High quality learning means developing and upskilling educators on the pedagogy of AI

    There’s been endless discussion about what students do with generative AI tools, and what constitutes legitimate use of AI in assessment, but as the technology continues to improve there’s a whole conversation to be had about what educators do with AI tools.

    We’re using the term “educators” to encompass both the academics leading modules and programmes and the professionals who support, enable and contribute to learning and teaching and student support.

    Realising the potential of the technologies that an institution invests in to support student success requires educators to be willing and able to deploy them in ways that are appropriate for their context. It requires them to be active and creative users of those technologies, not simply following a process or showing compliance with a policy.

    So it was a bit worrying, in the course of exploring what effective preparation for digital learning futures could look like for our Capability for change report last year, to find how concerned digital and education leaders were about the variable digital capabilities of their staff.

    Where technology meets pedagogy

    Inevitably, when it comes to AI, some HE staff are enthusiastic early adopters and innovators; others are more cautious or less confident – and some are highly critical and/or just want it to go away. Some of this is about personal orientation towards particular technologies – there is a lively and important critical debate about how society comes into a relationship with AI technology and the implications for, well, the future of humanity.

    Some of it is about the realities of the pressures that educators are under, and the lack of available time and headspace to engage with developmental activity. As one education leader put it:

    Sometimes staff, they know that they need to change what they’re doing, but they get caught in the academic cycle. So every year it’s back to teaching again, really, really large groups of students; they haven’t had the time to go and think about how to do things differently.

    But there’s also an institutional strategic challenge here about situating AI within the pedagogic environment – recognising that students will not only be using it habitually in their work and learning, but that they will expect to graduate with a level of competence in it in anticipation of using AI in the workplace. There’s an efficiency question about how using AI can reprofile educator working patterns and workflows. Even if the prospect of “freeing up” lots of time might feel a bit remote right now, educators are clearly going to be using AI in interesting ways to make some of their work a bit more efficient, to surface insight from large datasets that might not otherwise be accessible, or as a co-creator to help enhance their thinking and practice.

    In the context of learning and teaching, educators need to be ready to go beyond asking “how do the tools work and what can I do with them?” and be prepared to ask and answer a larger question: “what does it mean for academic quality and pedagogy when I do?”

    As Tom Chatfield has persuasively argued in his recent white paper on AI and the future of pedagogy, AI needs to have a clear educative purpose when it is deployed in learning and teaching, and should be about actively enhancing pedagogy. Reaching this halcyon state requires educators who are not only competent in the technical use of the tools that are available but prepared to work creatively to embed those tools to achieve particular learning objectives within the wider framework and structures of their academic discipline. Expertise of this nature is not cheaply won – it takes time and resource to think, experiment, test, and refine.

    Educators have the power – and responsibility – to work out how best to harness AI in learning and teaching in their disciplines, but education leaders need to create the right environment for innovation to flourish. As one leader put it:

    How do we create an environment where we’re allowing people to feel like they are the arbiters of their own day to day, that they’ve got more time, that they’re able to do the things that they want to do?…So that’s really an excitement for me. I think there’s real opportunity in digital to enable those things.

    Introducing “Educating the AI generation”

    For our new project “Educating the AI generation” we want to explore how institutions are developing educator AI literacy and practice – what frameworks, interventions, and provisions are helpful and effective, and where the barriers and challenges lie. What sort of environment helps educators to develop not just the capability, but also the motivation and opportunity to become skilled and critical users of AI in learning and teaching? And what does that teach us about how the role of educators might change as the higher education learning environment evolves?

    At the discussion session Rachel co-hosted alongside Kortext advisor Janice Kay at the Festival of Higher Education earlier this month, there was a strong sense among attendees that educating the AI generation requires universities to take action on multiple fronts simultaneously if they are to keep up with the pace of change in AI technology.

    Achieving this kind of agility means making space for risk-taking, and moving away from compliance-focused language to a more collaborative and exploratory approach, including with students, who are equally finding their feet with AI. For leaders, that could mean offering both reassurance that this approach is welcomed, and fostering spaces in which it can be deployed.

    In a time of such fast-paced change, staying grounded in concepts of what it means to be a professional educator can help manage the potential sense of threat from AI in learning and teaching. Discussions focused on the “how” of effective use of AI, and the ways it can support student learning and educator practice, are always grounded in core knowledge of pedagogy and education.

    On AI in assessment, it was instructive to hear student participants share a desire to be able to demonstrate learning and skills above and beyond what is captured in traditional assessment, and find different, authentic ways to engage with knowledge. Assessment is always a bit of a flashpoint in pedagogy, especially in constructing students’ understanding of their learning, and there is an open question on how AI technology can support educators in assessment design and execution. More prosaically, the risks to traditional assessment from large language models indicate that staff may need to spend proportionally more of their time on managing assessment going forward.

    Participants drew on the experience of the Covid pivot to emergency remote teaching – and the best lessons from trialling new ways of learning and teaching – as a useful reminder that the sector can pivot quickly, and well, when required. Yet the feeling that AI is often something of a “talking point” rather than an “action point” led some to suggest that there may not yet be a sufficiently pressing sense of urgency to kickstart change in practice.

    What is clear about the present moment is that the sector will make the most progress on these questions when there is sharing of thinking and practice and co-development of approaches. Over the next six months we’ll be building up our insight and we’d love to hear your views on what works to support educator development of AI in pedagogy. We’re not expecting any silver bullets, but if you have an example of practice to share, please get in touch.

    This article is published in association with Kortext. Join Debbie, Rachel and a host of other speakers at Kortext LIVE on Wednesday 11 February in London, where we’ll be discussing some of our findings – find out more and book your place here.

    Source link

  • Measuring What Matters: A Faculty Development System That Improves Teaching Quality – Faculty Focus

    Measuring What Matters: A Faculty Development System That Improves Teaching Quality – Faculty Focus

    Source link

  • Why busy educators need AI with guardrails

    Why busy educators need AI with guardrails

    In the growing conversation around AI in education, speed and efficiency often take center stage, but that focus can tempt busy educators to use what’s fast rather than what’s best. To truly serve teachers–and above all, students–AI must be built with intention and clear constraints that prioritize instructional quality, ensuring efficiency never comes at the expense of what learners need most.

    AI doesn’t inherently understand fairness, instructional nuance, or educational standards. It mirrors its training and guidance, usually as a capable generalist rather than a specialist. Without deliberate design, AI can produce content that’s misaligned or confusing. In education, fairness means an assessment measures only the intended skill and does so comparably for students from different backgrounds, languages, and abilities–without hidden barriers unrelated to what’s being assessed. Effective AI systems in schools need embedded controls to avoid construct‑irrelevant content: elements that distract from what’s actually being measured.

    For example, a math question shouldn’t hinge on dense prose, niche sports knowledge, or culturally-specific idioms unless those are part of the goal; visuals shouldn’t rely on low-contrast colors that are hard to see; audio shouldn’t assume a single accent; and timing shouldn’t penalize students if speed isn’t the construct.

    To improve fairness and accuracy in assessments:

    • Avoid construct-irrelevant content: Ensure test questions focus only on the skills and knowledge being assessed.
    • Use AI tools with built-in fairness controls: Generic AI models may not inherently understand fairness; choose tools designed specifically for educational contexts.
    • Train AI on expert-authored content: AI is only as fair and accurate as the data and expertise it’s trained on. Use models built with input from experienced educators and psychometricians.

    These subtleties matter. General-purpose AI tools, left untuned, often miss them.
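
    To make the idea of embedded fairness controls a little more concrete, the sketch below imagines an automated screen that a guarded system might run over AI-drafted items before human review. It is a minimal, hypothetical Python example – the item fields, idiom list, and thresholds are illustrative assumptions rather than any vendor’s actual rules, and a real fairness review would be far richer and would always involve expert judgement.

    ```python
    # Hypothetical sketch: screening AI-drafted assessment items for
    # construct-irrelevant features before they reach human reviewers.
    # All rules and thresholds here are illustrative assumptions, not a
    # published standard or any product's actual implementation.

    from dataclasses import dataclass, field


    @dataclass
    class Item:
        construct: str                 # the skill the item is meant to measure
        prompt: str                    # the question text shown to the student
        timed: bool = False            # administered under time pressure?
        flags: list = field(default_factory=list)


    # Illustrative examples of features usually irrelevant to a maths construct.
    CULTURAL_IDIOMS = {"home run", "sticky wicket", "hail mary"}
    MAX_WORDS_FOR_MATH_PROMPT = 40     # crude proxy for "dense prose"


    def screen_item(item: Item) -> Item:
        """Attach a flag for each potentially construct-irrelevant feature found."""
        text = item.prompt.lower()

        # Dense prose: a maths item shouldn't hinge on reading stamina.
        if "math" in item.construct.lower() and len(text.split()) > MAX_WORDS_FOR_MATH_PROMPT:
            item.flags.append("prompt may be too text-heavy for a maths construct")

        # Culturally specific idioms can create hidden barriers unrelated to the skill.
        for idiom in CULTURAL_IDIOMS:
            if idiom in text:
                item.flags.append(f"contains culturally specific idiom: '{idiom}'")

        # Speed should only matter when speed is the construct being measured.
        if item.timed and "fluency" not in item.construct.lower():
            item.flags.append("timed administration, but speed is not the construct")

        return item


    if __name__ == "__main__":
        draft = Item(
            construct="math: fraction addition",
            prompt=("After a hail mary in the fourth quarter, the team gained 3/8 "
                    "of the field, then 1/4 more. How much of the field in total?"),
            timed=True,
        )
        for flag in screen_item(draft).flags:
            print("REVIEW:", flag)
    ```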

    The risk of relying on convenience

    Educators face immense time pressures. It’s tempting to use AI to quickly generate assessments or learning materials. But speed can obscure deeper issues. A question might look fine on the surface but fail to meet cognitive complexity standards or align with curriculum goals. These aren’t always easy problems to spot, but they can impact student learning.

    To choose the right AI tools:

    • Select domain-specific AI over general models: Tools tailored for education are more likely to produce pedagogically-sound and standards-aligned content that empowers students to succeed. In a 2024 University of Pennsylvania study, students using a customized AI tutor scored 127 percent higher on practice problems than those without.
    • Be cautious with out-of-the-box AI: Without expertise, educators may struggle to critique or validate AI-generated content, risking poor-quality assessments.
    • Understand the limitations of general AI: While capable of generating content, general models may lack depth in educational theory and assessment design.

    General AI tools can get you 60 percent of the way there. But that last 40 percent is the part that ensures quality, fairness, and educational value. This requires expertise to get right. That’s where structured, guided AI becomes essential.

    Building AI that thinks like an educator

    Developing AI for education requires close collaboration with psychometricians and subject matter experts to shape how the system behaves. This helps ensure it produces content that’s not just technically correct, but pedagogically sound.

    To ensure quality in AI-generated content:

    • Involve experts in the development process: Psychometricians and educators should review AI outputs to ensure alignment with learning goals and standards.
    • Use manual review cycles: Unlike benchmark-driven models, educational AI requires human evaluation to validate quality and relevance (see the sketch below).
    • Focus on cognitive complexity: Design assessments with varied difficulty levels and ensure they measure intended constructs.

    This process is iterative and manual. It’s grounded in real-world educational standards, not just benchmark scores.
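
    As a rough illustration of the kind of manual review cycle described above, here is a minimal sketch in which AI-drafted items only become eligible for release once an expert has explicitly signed them off. The field names and workflow are assumptions made for the example, not a description of any particular platform.

    ```python
    # Hypothetical sketch of a manual review cycle: AI-drafted items only become
    # available to students after an expert signs them off. Names and fields are
    # illustrative assumptions, not a real platform's workflow.

    from dataclasses import dataclass
    from enum import Enum


    class Status(Enum):
        DRAFT = "draft"             # generated by the model, not yet reviewed
        APPROVED = "approved"       # reviewed and accepted by an expert
        REJECTED = "rejected"       # sent back with feedback for regeneration


    @dataclass
    class DraftItem:
        prompt: str
        complexity: str             # e.g. "recall", "application", "analysis"
        status: Status = Status.DRAFT
        reviewer_notes: str = ""


    def expert_review(item: DraftItem, approve: bool, notes: str = "") -> DraftItem:
        """Record an expert decision; nothing reaches students while still a draft."""
        item.status = Status.APPROVED if approve else Status.REJECTED
        item.reviewer_notes = notes
        return item


    def publishable(items: list[DraftItem]) -> list[DraftItem]:
        """Only expert-approved items are eligible for release."""
        return [i for i in items if i.status is Status.APPROVED]


    if __name__ == "__main__":
        batch = [
            DraftItem("State the formula for the area of a circle.", complexity="recall"),
            DraftItem("Design an experiment to estimate pi from random points.", complexity="analysis"),
        ]
        expert_review(batch[0], approve=True)
        expert_review(batch[1], approve=False, notes="needs scaffolding for the target year group")
        print([i.prompt for i in publishable(batch)])
    ```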

    Personalization needs structure

    AI’s ability to personalize learning is promising. But without structure, personalization can lead students off track. AI might guide learners toward content that’s irrelevant or misaligned with their goals. That’s why personalization must be paired with oversight and intentional design.

    To harness personalization responsibly:

    • Let experts set goals and guardrails: Define standards, scope and sequence, and success criteria; AI adapts within those boundaries (see the sketch below).
    • Use AI for diagnostics and drafting, not decisions: Have it flag gaps, suggest resources, and generate practice, while educators curate and approve.
    • Preserve curricular coherence: Keep prerequisites, spacing, and transfer in view so learners don’t drift into content that’s engaging but misaligned.
    • Support educator literacy in AI: Professional development is key to helping teachers use AI effectively and responsibly.

    It’s not enough to adapt–the adaptation must be meaningful and educationally coherent.
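
    A minimal sketch of the “experts set the guardrails, AI adapts within them” pattern, under the assumption of a simple expert-authored scope-and-sequence map with prerequisites, might look like the following. The curriculum map and the stubbed recommender are purely illustrative; the point is only that adaptive suggestions are filtered against boundaries that educators, not the model, have defined.

    ```python
    # Hypothetical sketch of "experts set the guardrails, AI adapts within them".
    # The curriculum map, prerequisite graph, and ai_suggestions() stub are
    # illustrative assumptions, not a real product's data model.

    # Expert-authored scope and sequence: each topic lists its prerequisites.
    CURRICULUM = {
        "fractions:add": {"prereqs": []},
        "fractions:multiply": {"prereqs": ["fractions:add"]},
        "ratios:intro": {"prereqs": ["fractions:multiply"]},
        "probability:intro": {"prereqs": ["ratios:intro"]},
    }


    def ai_suggestions(learner_profile: dict) -> list[str]:
        """Stand-in for an adaptive model: returns topics it thinks are engaging.

        In practice this might be a recommender or an LLM call; here it simply
        returns a fixed list so the example is self-contained and runnable.
        """
        return ["probability:intro", "fractions:multiply", "chess:openings"]


    def within_guardrails(suggestions: list[str], mastered: set[str]) -> list[str]:
        """Keep only suggestions that are in scope and whose prerequisites are met."""
        approved = []
        for topic in suggestions:
            if topic not in CURRICULUM:
                continue  # out of the expert-defined scope entirely
            if not set(CURRICULUM[topic]["prereqs"]).issubset(mastered):
                continue  # engaging, perhaps, but the learner isn't ready yet
            approved.append(topic)
        return approved


    if __name__ == "__main__":
        mastered = {"fractions:add"}
        raw = ai_suggestions({"learner_id": "demo"})
        print(within_guardrails(raw, mastered))   # -> ['fractions:multiply']
    ```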

    AI can accelerate content creation and internal workflows. But speed alone isn’t a virtue. Without scrutiny, fast outputs can compromise quality.

    To maintain efficiency and innovation:

    • Use AI to streamline internal processes: Beyond student-facing tools, AI can help educators and institutions build resources faster and more efficiently.
    • Maintain high standards despite automation: Even as AI accelerates content creation, human oversight is essential to uphold educational quality.

    Responsible use of AI requires processes that ensure every AI-generated item is part of a system designed to uphold educational integrity.

    An effective approach to AI in education is driven by concern–not fear, but responsibility. Educators are doing their best under challenging conditions, and the goal should be building AI tools that support their work.

    When frameworks and safeguards are built-in, what reaches students is more likely to be accurate, fair, and aligned with learning goals.

    In education, trust is foundational. And trust in AI starts with thoughtful design, expert oversight, and a deep respect for the work educators do every day.

    Source link