As AI increasingly automates technical tasks across industries, students’ long-term career success will rely less on technical skills alone and more on durable professional skills, often referred to as soft skills. These include empathy, resilience, collaboration, and ethical reasoning – skills that machines can’t replicate.
This critical need is outlined in Future-Proofing Students: Professional Skills in the Age of AI, a new report from Acuity Insights. Drawing on a broad body of academic and market research, the report provides an analysis of how institutions can better prepare students with the professional skills most critical in an AI-driven world.
Key findings from the report:
75 percent of long-term job success is attributed to professional skills, not technical expertise.
Over 25 percent of executives say they won’t hire recent graduates due to a lack of durable skills.
COVID-19 disrupted professional skill development, leaving many students underprepared for collaboration, communication, and professional norms.
Eight essential durable skills must be intentionally developed for students to thrive in an AI-driven workplace.
“Technical skills may open the door, but it’s human skills like empathy and resilience that endure over time and lead to a fruitful and rewarding career,” says Matt Holland, CEO at Acuity Insights. “As AI reshapes the workforce, it has become critical for higher education to take the lead in preparing students with these skills that will define their long-term success.”
The eight critical durable skills include:
Empathy
Teamwork
Communication
Motivation
Resilience
Ethical reasoning
Problem solving
Self-awareness
These competencies don’t expire with technology – they grow stronger over time, helping graduates adapt, lead, and thrive in an AI-driven world.
The report also outlines practical strategies for institutions, including assessing non-academic skills at admissions using Situational Judgment Tests (SJTs), and shares recommendations on embedding professional skills development throughout curricula and forming partnerships that bridge AI literacy with interpersonal and ethical reasoning.
As the higher education sector in England gets deeper into the metaphorical financial woods, the frequency of OfS updates on the sector’s financial position increases apace.
Today’s financial sustainability bulletin constitutes an update to the regulator’s formal annual assessment of sector financial sustainability published in May 2025. The update takes account of the latest recruitment data, plus any policy changes affecting the sector’s financial outlook that providers could not have factored into the financial returns they submitted to OfS ahead of the May report.
Recruitment headlines
At sector level, UK and international recruitment for autumn 2025 entry has grown by 3.1 per cent and 6.3 per cent respectively. But this is still lower than the aggregate sector forecasts of 4.1 per cent and 8.6 per cent, which OfS estimates could leave sector-wide tuition fee income £437.8m lower than forecast. “Optimism bias” in financial forecasting might have been dialled back in recent years following stiff warnings from OfS, but these figures suggest it’s still very much a factor.
Growth has also been uneven across the sector, with large research-intensive institutions increasing UK undergraduate numbers by a startling 9.9 per cent in 2025 (despite apparently collectively forecasting a modest decline of 1.7 per cent), and pretty much everyone else coming in lower than forecast or taking a hit. Medium-sized institutions win a hat tip for producing the most accurate prediction of UK undergraduate growth – actual growth of 2.3 per cent against projected growth of 2.7 per cent.
The picture shifts slightly when it comes to international recruitment, where larger research-intensives have issued 3.3 per cent fewer Confirmations of Acceptance for Studies (CAS) against a forecast 6.6 per cent increase, largely driven by a reduction in visas issued to students from China. Smaller and specialist institutions by contrast seem to have enjoyed growth well beyond forecast. The individual institutional picture will, of course, vary even more – and it’s worth adding that the data is not perfect, as not every student applies through UCAS.
Modelling the impact
OfS has factored in all of the recruitment data it has, and added in new policy announcements, including an estimate of the impact of the indexation of undergraduate tuition fees and of increases to employers’ National Insurance contributions – but not the international levy, because nobody knows when that is happening or how it will be calculated. It has then applied its model to providers’ financial outlook.
The headline makes for sombre reading – across all categories of provider OfS is predicting that if no action were taken, the number of providers operating in deficit in 2025–26 would rise from 96 to 124, representing an increase from 35 per cent of the sector to 45 per cent.
Contrary to the impression given by UK undergraduate recruitment headlines, the negative impact isn’t concentrated in any one part of the sector. OfS modelling suggests that ten larger research-intensive institutions could tip into deficit in 2025–26, up from five that were already forecasting themselves to be in that position. The only category of provider where OfS estimates indicate fewer providers in deficit than forecast is large teaching-intensives.
Net liquidity of under 30 days is the number to keep an eye on, because for institutional survival, running out of cash would be much more of a problem than running a deficit. OfS modelling suggests that the number of providers reporting net liquidity of under 30 days could rise from 41 to 45 in 2025–26, with those providers concentrated in the smaller and specialist/specialist creative groups.
What it all means
Before everyone presses the panic button, it’s important to remember, as OfS points out, that providers know their own recruitment data and its impact on their bottom line, and will have taken what action they can to reduce in-year costs – though nobody should underestimate the ongoing toll those actions will have taken on staff and students.
Longer term, as always, the outlook appears sunnier, but that’s based on some ongoing optimism in financial forecasting. If, as seems to keep happening, some of that optimism turns out to be misplaced, then the financial struggles of the sector are far from over.
Against this backdrop, the question remains less about who might collapse in a heap and more about how to manage longer-term strategic change to adapt providers’ business models to the environment they are operating in. Though government has announced that it wants providers to coordinate, specialise and collaborate, while the sector continues to battle heavy financial weather those aspirations will be difficult to realise, however desirable they might be in principle.
Iowa City, Iowa and Dallas, Texas (November 12, 2025) – ACT, a leader in college and career readiness assessment, and Texas Instruments Education Technology (TI), a division of the global semiconductor company, today announced a comprehensive partnership aimed at empowering students to achieve their best performance on the ACT mathematics test.
This initiative brings together two education leaders to provide innovative resources and tools that maximize student potential. The partnership will start by providing:
A new dedicated online resource center featuring co-branded instructional videos demonstrating optimal use of TI calculators during the ACT mathematics test.
Additional study materials featuring TI calculators to help students build upon and apply their mathematical knowledge while maximizing their time on the ACT test.
“This partnership represents our commitment to providing students with the tools and resources they need to demonstrate their mathematical knowledge effectively,” said Andrew Taylor, Senior Vice President of Educational Solutions and International, ACT. “By working with Texas Instruments, we’re ensuring students have access to familiar, powerful technology tools during this important assessment.”
“Texas Instruments is proud to partner with ACT to support student success,” said Laura Chambers, President at Texas Instruments Education Technology. “Our calculator technology, combined with targeted instructional resources, will help students showcase their true mathematical abilities during the ACT test.”
ACT is transforming college and career readiness pathways so that everyone can discover and fulfill their potential. Grounded in more than 65 years of research, ACT’s learning resources, assessments, research, and work-ready credentials are trusted by students, job seekers, educators, schools, government agencies, and employers in the U.S. and around the world to help people achieve their education and career goals at every stage of life. Visit us at https://www.act.org/.
About Texas Instruments
Texas Instruments Education Technology (TI) — the gold standard for excellence in math — provides exam-approved graphing calculators and interactive STEM technology. TI calculators and accessories drive student understanding and engagement without adding to online distractions. We are committed to empowering teachers, inspiring students and supporting real learning in classrooms everywhere. For more information, visit education.ti.com.
Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, enterprise systems and communications equipment. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com.
eSchool Media staff cover education technology in all its aspects–from legislation and litigation, to best practices, to lessons learned and new products. First published in March of 1998 as a monthly print and digital newspaper, eSchool Media provides the news and information necessary to help K-20 decision-makers successfully use technology and innovation to transform schools and colleges and achieve their educational goals.
This blog was kindly authored by Professor Sir Chris Husbands, who is a Director of Higher Futures and a HEPI Trustee. He was previously the Vice-Chancellor of Sheffield Hallam University.
The Final Report of the Curriculum and Assessment Review led by Becky Francis has been published. At 196 pages, with 16 pages of recommendations, it is a long and complex document – long and complex enough for early commentary to find quite different things in it. The Daily Telegraph headline was ‘five year olds to learn climate change under Labour’ (a Bad Thing). The Guardian headline was that the curriculum “should focus less on exams and more on life skills” (a Good Thing). The Times was more neutral, picking up on “more diversity — and fewer exams at GCSE” (which could be Bad Things or Good Things). The report’s length and attention to analytical detail (there are 478 footnotes and what is by comparison a brief analytical annex of 37 pages) make it difficult to summarise. But so does the granularity of the recommendations, which include, for example, that the Government reviews “how the PE Key Stage 1 to 4 Programmes of Study refer to Dance, including whether they are sufficiently specific” (p 106), or that it “makes minor refinements to the Geography Programmes of Study” (p 83). The detail and granularity are consistent with Becky Francis’s repeated statement that her review would be “evolutionary not revolutionary”, and there is enough here to keep curriculum leaders in schools and assessment policy wonks in awarding bodies busy for a very long time. But they make it difficult to see the underlying ideas.
But underlying ideas are there. From the perspective of higher education policy, there are probably three. The first is commitment to evolution rather than revolution. The school curriculum in five years’ time will not be unrecognisable from now. It will continue to be a subject-based, knowledge-led curriculum. Some of the more egregious aspects of the Gove reforms of a decade ago – the obsession with formal teaching of technical grammar such as ‘fronted adverbials’ (p 75), the overloading of history programmes of study (p 85) and the obsession with pre-twentieth century literature (p 79) – are abandoned. There is greater emphasis on diversity and inclusion “so that all young people can see themselves represented” in what they learn (p 10). The review is conservative on assessment. The burden of assessment at 16 is to be reduced – the review suggests a 10% reduction in GCSE examinations is feasible and would increase time for teaching (p 135) – but not changed in its overall form. Government “[should] continue to employ the principle that non-exam assessment should be used only when it is the only valid way to assess essential elements of a subject” (pp 138, 193), because “externally set and marked exams remain the fairest and most reliable method of assessment” (p 136).
This rebalancing, with less prescription, greater clarity between what is statutory and what is advisory and the reduced volume of GCSE (but not, unless I have missed it, A-level) assessment is the key to a second underlying idea, which is an attempt to enhance teacher agency. This is cautious. The report is strong on pupil entitlement: this is a “national curriculum … for all our children and young people” (p 180), which obviously means that it should apply to academies as well as local authority maintained schools, and that it should apply to all learners, not least because “learners with the lowest GCSE attainment (particularly grades 1 or 2) have fundamental knowledge gaps that extend to earlier key stages” (p 163). But the report also recognises that any curriculum depends on those who teach it, and it recommends that the overhaul of programmes of study “involve… teachers in the testing and design of Programmes of Study as part of the drafting process” (p 54). In all curriculum reform, the balance between prescription and latitude, between entitlement and flexibility, is difficult. The report creates more space for teachers to make choices, but it retains a strong prescriptive core, and in some cases extends that: under current arrangements, Religious Education, although compulsory, is outside the national curriculum. Francis wants the content brought into national requirements, and there will be hard-fought arguments about RE.
The third key idea is about the 16-19 curriculum. The review stresses the key importance of level 3 learning and qualifications in “shaping life chances and supporting our economy” (p 31), and it is sensibly less clear-cut than the interim report appeared to be on the differentiation between academic and vocational pathways at 16-19. Arguably, the report is agnostic about which level 3 pathway students pursue, provided that more students do access level 3. The report recommends abandoning the English Baccalaureate performance measure on the evidential grounds that “whilst well-intentioned [it has] not achieved” its goal and has “to some degree unnecessarily constrained students’ choices” (p 10). The EBacc will be largely unmourned, except by its ministerial architects. The Review’s strong focus on progression is critical and may be its most important feature in the long term. Francis “thinks it is important that as many learners as possible who have achieved level 2 (five GCSEs at grade 4 or above or the equivalent) should be supported to study at level 3” (p 141). This is obviously important for HE and is consistent with the Prime Minister’s Labour Party conference commitment to expand access to tertiary education. The Review unsurprisingly endorses the new ‘V-level’ qualification, although T-levels and V-levels refer to concepts (technology, vocational education) which are not levels of study. The Review remains committed to T-levels, and was probably required to be, but its evidence here on take-up (just 3% of learners, p 141) is a reminder about just how far there is to go to establish T-levels. But the fundamental point is that the future of widening participation at all levels of tertiary education depends on improving progression from level 2 to level 3.
The Curriculum Review is a frustrating document to read. It is complex, thorough in its analysis of evidence and has clearly been hemmed in by policy priorities in several directions. It is dogged and detailed, well-meaning and intelligent, realistic about the world as it is and cautious about radical change. It offers a nip here, a tuck there and a tweak in other places. Arguably, this is Starmerism in the form of education policy. Natalie Perera, CEO of the Education Policy Institute, which I chair, called it “a broadly sensible direction of travel” and that is right. The initial reaction of the Opposition that the review rates “learning about climate change as more important than learning to read and write” is clearly absurd.
Many of the Review recommendations are for government and others to do further work. That means that the real impact of the Review depends less on the recommendations themselves than on the combination of political will and strategic implementation to make things happen. The government’s immediate response, which has been to accept some of the recommendations, suggests that there is scope for a good deal of frustration as thinking turns to implementation. Higher education academics and institutions care a lot about the ways that the compulsory education system prepares young people for entry to higher education – even if that is not its main function. What they get from the review is a largely recognisable curriculum landscape, conservatism about assessment approaches, a national entitlement to a more modernised and flexible curriculum, and, above all, a strong focus on pathways into post-16 study.
HEPI Director, Nick Hillman, takes a first look at today’s Final Report from the Curriculum and Assessment Review.
It feels like Christmas has come early for policy nerds. At 6.01am this morning, we finally got sight of Building a world-class curriculum for all, the long-awaited report from the Government’s independent Curriculum and Assessment Review (CAR).
Overseen by Professor Becky Francis, who is an experienced educational leader and researcher and someone who also has a background in policy, it was commissioned back in the Labour Government’s first flush, when it was enjoying brighter days.
The first thing to note about the report is that, in truth, independent reports commissioned by governments are only half independent. For example, the lead reviewer is usually keen to ensure their report lands on fertile soil (and, indeed, is usually chosen because they have some affinity to the people in charge). In addition, independent reviews are supported by established civil servants inside the machine and there is usually a conversation behind the scenes between the independent review team and those closest to ministers as the work progresses. (In higher education, for example, both the Browne and Augar reviews fit this model.) So it is no great surprise that the Government has accepted most of what Becky’s largely evidence-led team has said.
Yet anyone reading the press coverage of the CAR while it has been underway, or anyone who has seen the front page of today’s Daily Mail, which screams ‘LABOUR DUMBS DOWN SCHOOLS’, may wonder if the report that has landed today is the nightmare before Christmas rather than a welcome festive present. There is lots to like but the document also feels incomplete, especially – for example – for people with an interest in higher education. So it is perhaps best thought of as a present for which the batteries have yet to arrive.
Nonetheless, this morning I spoke at the always excellent University Admissions Conference hosted annually by the Headmasters’ and Headmistresses’ Conference (HMC) and the Girls’ School Association (GSA) and I could not help wondering aloud whether any new restraints on state-maintained schools might give our leading independent schools, who are much freer to teach what they like, an additional edge – especially as academy schools are already, even before today, having freedoms ripped from them.
What does the CAR say (and what does it not say)?
But what does the review, which had a team of 11 beneath Becky (including one Vice-Chancellor in Professor Nic Beech and also Jo-Anne Baird from the Oxford University Centre for Educational Assessment) actually say?
The first thing to note is that it is much better than the Interim Report, which said little, sought to be all things to all people and read like it had been written by one of the better generative AI tools.
In terms of hard proposals, the Final Report starts and ends with older pupils, those aged 16 to 19, for whom we are told there should be ‘a new third pathway at level 3 to sit alongside A-Levels and T Levels.’ If this feels familiar, it is because the Curriculum and Assessment Review’s emerging findings helped shape the recent Post-16 Education and Skills white paper and, more importantly, because there is already such a pathway populated by qualifications like BTECs.
So there is a sense of reinventing the wheel here, with (to mix my metaphors) politicians putting a new coat of paint on the current system. In many respects, the material on 16-to-19 pupils is the least interesting part of the report – especially as there is next to nothing at all on A-Levels. The review team starkly states, ‘we heard very little concern regarding A Levels in our Call for Evidence and our sector engagement’, so they basically ignore them – in a world of change, A Levels continue to sail steadfastly on.
As trailed in the newspapers, there is a recommendation for new ‘diagnostic Maths and English tests to be taken in Year 8.’ This would obviously help track progress between the tests taken at the end of primary school (in Year 6) and the public exams taken at age 16. But the idea has already prompted anger from trade unionists, almost guaranteeing that the benefits and downsides will be overegged in the inevitable political rows to come.
There are also numerous scattergun subject-by-subject recommendations. These are largely sensible (see, for example, the ideas on improving English GCSEs or the section on Science) but also a little unsatisfying. Some of the subject-specific changes are a little trite or inconsequential (like tweaks to the name of individual GCSEs) while others need much more detail than a general review of everything that happens between the ages of 11 and 19 is able to offer. Any material changes will need to be at a wholly different level of detail to what we have got today, and they will be some years away, so may make no difference to anyone already at secondary school.
Other points to note include that the Review is Gove-ian in its love of exams, which it stresses are a protection against the negatives of AI, over coursework. (I suspect Dennis Sherwood, the campaigner against grading inaccuracy, will be incandescent about how the report appears to skate over some of the imperfections of how exams currently operate.) However, despite the support for exams, one of the crunchiest recommendations in the review is the proposal of a 10% reduction in ‘overall GCSE exam volume’, which we are told can happen without any significant downsides, though the tricky details are palmed off to Ofqual and others.
The English Baccalaureate and Progress 8
The one really clear place where Professor Francis’s review team and the Government, who have generally accepted the recommendations, are out of kilter with one another is on Progress 8.
Progress 8 is a school accountability measure that assesses how much ‘value-added’ progress occurs between primary school (SATs) and GCSEs. It is such a favoured measure that the Government has recently proposed a new Progress 8 measure for universities (which is a mad idea that wrongly assumes universities are just big schools – in reality, it is a defining feature of universities that they set their own curricula and are their own awarding body).
Becky Francis opposes the EBacc, which is a metric related to, but separate from, Progress 8, yet she wishes to maintain some vestiges of the EBacc within Progress 8. While the EBacc focuses specifically on how many students achieve qualifications in a list of specially favoured subject areas (English Lang and Lit, Maths, Sciences, Geography or History plus a language), the CAR recommends ‘the removal of the EBacc measures but the retention of the EBacc “bucket” in Progress 8 under the new title of “Academic Breadth”.’
This is something the Government is not running with, favouring fewer restrictions on Progress 8 instead, which may or may not reinvigorate some creative subjects. Yes, it is all exceptionally complicated but Schools Week have an excellent guide and the two pictures below (from Government sources) might help: the first shows the status quo on Progress 8 and is what Becky Francis wishes to maintain (though pillars 3, 4 and 5 would be renamed if she got her way); the second shows the Government’s proposal.
How does it fare?
Call me simple, but I was always going to judge the Curriculum and Assessment Review partly on the extent to which it tackled specific challenges that we have looked at closely at HEPI in recent years. Here the CAR is a mixed bag. On the positive side of the ledger, the review recommends more financial education, reflecting the polling we conducted to help inform the CAR’s work: when we asked undergraduates how well prepared they felt for higher education, 59% said they felt they should have had more education on finances and budgeting.
The most obvious problem that the CAR insufficiently addresses is the huge underperformance of boys. This issue usually gets a namecheck in Bridget Phillipson’s interviews but it was entirely ignored in the recent Post-16 white paper; in the CAR, it does at least receive a quick nod and just maybe some of the proposed curriculum changes will benefit boys more than girls. But there is more focus on class and other personal characteristics than sex and in the end the brief acknowledgement of boys’ underperformance does not lead to anything properly focused on the problem.
This is very strange, for we simply cannot fix the inequalities in outcomes until we give the gaps in the attainment of boys and girls the attention they deserve. I am beginning to think I was wrong to be so hopeful that a female Secretary of State was more likely to focus on this issue than a male one (on the grounds that it would be less sensitive politically).
Another area where we at HEPI have been mildly obsessed is the catastrophic decline in language learning, as tracked for us by the Oxonian Megan Bowler. Here, as with boys, the new review is disappointing. In the section looking at welcome subject-by-subject changes, the recommendations on languages are both relatively tentative and relatively weak. As one linguist emailed me first thing this morning, ‘It is pretty remarkable that the CAR’s decision on languages runs exactly contrary to the best and consistent advice of the key language advisers on the issue’. However, the Government’s response goes a little further and Ministers promise to ‘explore the feasibility of developing a new qualification for languages that enables all pupils to have their achievements acknowledged when they are ready rather than at fixed points.’ We might not want languages always to be treated so differently from other subjects but I am still chalking that up as a win.
The CAR also ignores entirely one issue that is currently filling some MPs’ postbags – the defunding of the International Baccalaureate (IB). The IB delivers a broad curriculum for sixth-formers, is liked by highly selective universities and tackles the early specialisation which marks out our education system from those in many competitor nations. Back in the heady Blair years, Labour politicians loved the qualification and promised to bring it within touching distance of most young people.
As HEPI is a higher education body, it also feels incumbent upon me to point out that higher education is largely notable by its absence in the CAR, with universities mentioned just nine times across the (almost) 200 pages, despite schools and colleges obviously being the main pipeline for new students. It is rather different from the days when universities were regarded as having a key direct role to play in designing what goes on in schools. Indeed, our exam boards tended to originate within universities.
The odd references to universities that do make it in to the CAR report are not especially illuminating. For example, more selective universities appear as part of the rationale for killing the EBacc: ‘the evidence does not suggest that taking the EBacc combination of subjects increases the likelihood that students attend Russell Group universities.’ Universities also appear in the section on bolstering T Levels, with the review proposing ‘The Government should continue to promote awareness and understanding of T Levels to the HE sector.’ But that is about it.
Incidentally, there is also less in the report on extracurricular activities than the pre-publication press coverage might have led you to believe, even if the Government’s response to the review does focus on improving the offer here.
Trade-offs
Becky Francis used to head up the UCL Institute of Education (IoE), which is an institution that has always wrestled with excellence versus opportunity. Years ago, I sat in a learnèd IoE seminar on why university league tables are supposedly pernicious – but I had to walk past multiple banners boasting that the IoE was ‘Number 1’ in the world for studying education to get to the seminar and, while I was in the room, news came through that the IoE was going to cement its reputation and position by merging with UCL.
Such tension is a reminder that educational changes generally have trade-offs and the Executive Summary of the main CAR document admits: ‘All potential reforms to curriculum and assessment come with trade-offs’. Abolishing the EBacc, as the CAR team want, and watering down Progress 8, as the Department for Education wants, might help some pupils and some disciplines while making the numbers we produce about ourselves look better – though the numbers produced by others about us (at places like the OECD) could come to tell a different story in time.
In the end, we have to recognise that there are only so many hours in the school day, only so many (ie not enough) teachers and only so much room in pupils’ lives, not to mention huge diversity among pupils, schools and staff, which together ensure there can be no perfect curriculum. More of one subject or more extracurricular activities are likely to mean less of other things because the school day is not infinitely expandable (and there is nothing here to free up teachers’ time or fill in all those teacher vacancies). Yet the school curriculum does need to be revised over time to ensure it remains fit for purpose.
The question now is whether the CAR report matters. Will we still be talking about it in 20 years’ time? Can a Government buffeted by all sides, facing a huge fiscal crisis and with a Secretary of State for Education who sometimes seems more focused on political battles (like the recent Deputy Leadership election of the Labour Party) than on engaging with the latest educational evidence really deliver Becky Francis’s vision? Or will the CAR’s proposals wilt as quickly as the last really big proposal for curriculum reform: Rishi Sunak’s British Baccalaureate? In all honesty, I am not certain, but there are, in theory at least, four years of this Parliament left, whereas Rishi Sunak spent more like four months pushing his idea.
My parting thought, however, is different. It is that, while the trade-offs in the CAR report partly just represent the facts of life in education, they do not entirely do so. Trade-offs are much trickier to deal with when you are also seeking to root out diversity of provision. And in the end, if there is one thing that marks this Government’s mixed approach to schooling out above all, it is the desire to make all schools more alike, whether that is reducing academy freedoms, micromanaging the rules on school uniforms, defunding the IB, forcing state schools to stop offering classical languages or pushing independent schools to the wall. Would it be better, and also make politicians’ lives easier, if we stopped pretending that the 700,000 kids in each school year group are more like one another than they really are?
Postscript: While the CAR paper is infinitely more digestible than the interim document, there is still some wonderful eduspeak, my favourite of which is:
A vocational qualification is aligned to a sector and is usually taught and assessed in an applied way. A technical qualification meanwhile has a direct alignment with an occupational standard. Despite the name ‘Technical Awards’, these qualifications are therefore vocational rather than technical.
When I ask apprentices to reflect on their learning in professional discussions, I often hear a similar story:
It wasn’t just about what I knew – it was how I connected it all. That’s when it clicked.
That’s the value of dialogic assessment. It surfaces hidden knowledge, creates space for reflection, and validates professional judgement in ways that traditional essays often cannot.
Dialogic assessment shifts the emphasis from static products – the essay, the exam – to dynamic, real-time engagement. These assessments include structured discussions, viva-style conversations, or portfolio presentations. What unites them is their reliance on interaction, reflection, and responsiveness in the moment.
Unlike “oral exams” of old, these conversations require learners to explain reasoning, apply knowledge, and reflect on lived experience. They capture the complex but authentic process of thinking – not just the polished outcome.
In Australia, “interactive orals” have been adopted at scale to promote integrity and authentic learning, with positive feedback from staff and students. Several UK universities have piloted viva-style alternatives to traditional coursework with similar results. What apprenticeships have long taken for granted is now being recognised more widely: dialogue is a powerful form of assessment.
Lessons from apprenticeships
In apprenticeships and work-based learning, dialogic assessment is not an add-on – it’s essential. Apprentices regularly take part in professional discussions (PDs) and portfolio presentations as part of both formative and end-point assessment.
What makes them so powerful? They are inclusive, as they allow different strengths to emerge. Written tasks may favour those fluent in academic conventions, while discussions reveal applied judgement and reflective thinking. They are authentic, in that they mirror real workplace activities such as interviews, stakeholder reviews, and project pitches. And they can be transformative – apprentices often describe PDs as moments when fragmented knowledge comes together through dialogue.
One apprentice told me:
It wasn’t until I talked it through that I realised I knew more than I thought – I just couldn’t get it down on paper.
For international students, dialogic assessment can also level the playing field by valuing applied reasoning over written fluency, reducing the barriers posed by rigid academic writing norms.
My doctoral research has shown that PDs not only assess knowledge but also co-create it. They push learners to prepare more deeply, reflect more critically, and engage more authentically. Tutors report richer opportunities for feedback in the process itself, while employers highlight their relevance to workplace practice.
And AI fits into this picture too. When ChatGPT and similar tools emerged in late 2022, many feared the end of traditional written assessment. Universities scrambled for answers – detection software, bans, or a return to the three-hour exam. The risk has been a slide towards high-surveillance, low-trust assessment cultures.
But dialogic assessment offers another path. Its strength is precisely that it asks students to do what AI cannot:
Authentic reflection – learners connect insights to their own lived experience.
Real-time reasoning – learners respond to questions, defend ideas, and adapt on the spot.
Professional identity – learners practise the kind of reflective judgement expected in real workplaces.
Assessment futures
Scaling dialogic assessment isn’t without hurdles. Large cohorts and workload pressures can make universities hesitant. Online viva formats also raise equity issues for students without stable internet or quiet environments.
But these challenges can be mitigated: clear rubrics, tutor training, and reliable digital platforms make it possible to mainstream dialogic formats without compromising rigour or inclusivity. Apprenticeships show it can be done at scale – thousands of students sit PDs every year.
Crucially, dialogic assessment also aligns neatly with regulatory frameworks. The Office for Students requires that assessments be valid, reliable, and representative of authentic learning. The QAA Quality Code emphasises inclusivity and support for learning. Dialogic formats tick all these boxes.
The AI panic has created a rare opportunity. Universities can either double down on outdated methods – or embrace formats that are more authentic, equitable, and future-oriented.
This doesn’t mean abandoning essays or projects altogether. But it could mean ensuring every programme includes at least one dialogic assessment – whether a viva, professional discussion, or reflective dialogue.
Apprenticeships have demonstrated that dialogic assessments are effective. They are rigorous, scalable, and trusted. Now is the time for the wider higher education sector to recognise their value – not as a niche alternative, but as a core element of assessment in the AI era.
Although the Next Generation Science Standards (NGSS) were released more than a decade ago, adoption of them varies widely in California. I have been to districts that have taken the standards and run with them, but others have been slow to get off the ground with NGSS–even 12 years after their release. In some cases, this is due to a lack of funding, a lack of staffing, or even administrators’ lack of understanding of the active, student-driven pedagogies championed by the NGSS.
Another potential challenge to implementing NGSS with fidelity comes from teachers’ and administrators’ epistemological beliefs–simply put, their beliefs about how people learn. Teachers bring so much of themselves to the classroom, and that means teaching in a way they think is going to help their students learn. So, it’s understandable that teachers who have found success with traditional lecture-based methods may be reluctant to embrace an inquiry-based approach. It also makes sense that administrators who are former teachers will expect classrooms to look the same as when they were teaching, which may mean students sitting in rows, facing the front, writing down notes.
Based on my experience as both a science educator and an administrator, here are some strategies for encouraging both teachers and administrators to embrace the NGSS.
For teachers: Shift expectations and embrace ‘organized chaos’
A helpful first step is to approach the NGSS not as a set of standards, but rather a set of performance expectations. Those expectations include all three dimensions of science learning: disciplinary core ideas (DCIs), science and engineering practices (SEPs), and cross-cutting concepts (CCCs). The DCIs reflect the things that students know, the SEPs reflect what students are doing, and the CCCs reflect how students think. This three-dimensional approach sets the stage for a more active, engaged learning environment where students construct their own understanding of science content knowledge.
To meet the expectations laid out in the NGSS, teachers can start by modifying existing “recipe labs” into a more inquiry-based model that emphasizes student construction of knowledge. Resources like the NGSS-aligned digital curriculum from Kognity can simplify classroom implementation by giving teachers options for personalized instruction. Additionally, the Wonder of Science can help teachers integrate real-life phenomena into their NGSS-aligned labs, giving students authentic contexts for building an understanding of scientific concepts. Lastly, Inquiry Hub offers open-source full-year curricula that can aid teachers in refining their labs, classroom activities, and assessments.
For these updated labs to serve their purpose, teachers will need to reframe classroom management expectations to focus on student engagement and discussion. This may mean embracing what I call “organized chaos.” Over time, teachers will build a sense of efficacy through small successes, whether that’s spotting a student constructing their own knowledge or documenting an increased depth of knowledge in an entire class. The objective is to build on student understanding across the entire classroom, which teachers can do with much more confidence if they know that their administrators support them.
For administrators: Rethink evaluations and offer support
A recent survey found that 59 percent of administrators in California, where I work, understood how to support teachers with implementing the NGSS. Despite this, some administrators may need to recalibrate their expectations of what they’ll see when they observe classrooms. What they might see is organized chaos: students out of their seats, students talking, students engaged in all sorts of different activities. This is what NGSS-aligned learning looks like.
To provide a clear focus on student-centered learning indicators, they can revise observation rubrics to align with NGSS, or make their lives easier and use this one. As administrators track their teachers’ NGSS implementation, it helps to monitor their confidence levels. There will always be early implementers who take something new and run with it, and these educators can be inspiring models for those who are less eager to change.
The overall goal for administrators is to make classrooms safe spaces for experimentation and growth. The more administrators understand about the NGSS, the better they can support teachers in implementing it. They may not know all the details of the DCIs, SEPs, and CCCs, but they must accept that the NGSS require students to be more active, with the teacher acting as more of a facilitator and guide, rather than the keeper of all the knowledge.
Based on my experience in both teaching and administration roles, I can say that constructivist science classrooms may look and sound different–with more student talk, more questioning, and more chaos. By understanding these differences and supporting teachers through this transition, administrators ensure that all California students develop the deeper scientific thinking that NGSS was designed to foster.
Nancy Nasr, Ed.D., Santa Paula Unified School District
Nancy Nasr is a science educator and administrator at Santa Paula Unified School District. She can be reached at [email protected].
Higher education in the UK has a solid background in leveraging scale in purchasing digital content and licenses through Jisc. But when it comes to purchasing specific technology platforms, higher education institutions have tended to go their own way, using distinct specifications tailored to their particular needs.
There are some benefits to this individualistic approach, otherwise it would not have become the status quo. But as the Universities UK taskforce on transformation and efficiency proclaims a “new era of collaboration”, some of the long-standing assumptions about what can work in a sharing economy are being dusted off and held up to the light to see if they still hold. Efficiency – including finding ways to realise new forms of value with less overall resource input – is no longer a nice-to-have; it’s essential for the sector to remain sustainable.
At Jisc, licensing manager Hannah Lawrence is thinking about the ways that the sector’s digital services agency can build on existing approaches to collective procurement towards a more systematic collaboration, specifically, in her case, exploring ideas around a collaborative route to procurement for technology that supports assessment and feedback. Digital assessment is a compelling area for possible collaboration, partly because the operational challenges are fairly consistent between institutions – such as exam security, scalability, and accessibility – but also because of the shared pedagogical challenge of designing robust assessments that take account of the opportunities and risks of generative AI technology.
The potential value in collaboration isn’t just in cost savings – it’s also about working together to test and pilot approaches, and share insight and good practice. “Collaboration works best when it’s built on trust, not just transaction,” says Hannah. “We’re aiming to be transparent and open, respecting the diversity of the sector, and making collaboration sustainable by demonstrating real outcomes and upholding data handling standards and ethics.” Hannah predicts that it may take several years to develop an initial iteration of a joint procurement mechanism, in collaboration with a selection of vendors, recognising that the approach could evolve over years to offer “best in class” products at a competitive price to institutions who participate in collective procurement approaches.
Reviewing the SIKTuation
One way of learning how to build this new collaborative approach is to look to international examples. In Norway, SIKT is the higher education sector’s shared services agency. SIKT started with developing a national student information system, and has subsequently rolled out, among other initiatives, national scientific and diploma archives, and a national higher education application system – and a national tender for digital assessment.
In its first iteration, when the technology for digital assessment was still evolving, three different vendors were appointed, but in the most recent version, SIKT appointed one single vendor – UNIwise – as the preferred supplier for digital assessment for all of Norwegian higher education. Universities in Norway are not required to follow the SIKT framework, of course, but there are significant advantages to doing so.
“Through collaboration we create a powerful lobby,” says Christian Moen Fjære, service manager at SIKT. “By procuring for 30,000 staff and 300,000 students we can have a stronger voice and influence with vendors on the product development roadmap – much more so than any individual university. We can also be collectively more effective in sharing insight across the network, like sample exam questions, for example.” SIKT does not hold views about how students should be taught, but as pedagogy and technology become increasingly intertwined, SIKT’s discussions with vendors are typically informed by pedagogical developments. Christian explains, “You need to know what you want pedagogically to create the specification for the technical solution – you need to think what is best for teaching and assessment and then we can think how to change software to reflect that.”
For vendors, it’s obviously great to be able to sell your product at scale in this way but there’s more to it than that – serving a critical mass of buyers gives vendors the confidence to invest in developing their product, knowing it will meet the needs of their customers. Products evolve in response to long-term sector need, rather than short-term sales goals.
SIKT can also flex its muscles in negotiating favourable terms with vendors, and use its expertise and experience to avoid pitfalls in negotiating contracts. A particularly pertinent example is on data sharing, both securing assurances of ethical and anonymous sharing of assessment data, and clarity about ultimate ownership of the data. Participants in the network can benefit from a shared data pool, but all need to be confident both that the data will be handled appropriately and that ultimately it belongs to them, not the vendor. “We have baked into the latest requirements the ability to claw back data – we didn’t have this before, stupid, right?” says Christian. “But you learn as the needs arise.”
Difference and competition
In the UK context, the sector needs reassurance that diversity will be accommodated – there’s a wariness of anything that looks like it might be a one-size-fits-all model. While the political culture in Norway is undoubtedly more collectivist than in the UK, Norwegian higher education institutions have distinct missions, and they still compete for prestige and to recruit the best students and staff.
SIKT acknowledges these differences through a detailed consultation process in the creation of national tenders – a “pre-project” on the list of requirements for any technology platform, followed by formal consultation on the final list, overseen by a steering group with diverse sector representation. But at the end of the day, to realise the value of joining up, there does need to be some preparedness to compromise, or, to put it another way, to find and build on areas of similarity rather than over-refining on what are often minor differences. Having a coordinating body like SIKT convene the project helps to navigate these issues. And, of course, some institutions simply decide to go another way, and pay more for a more tailored product. There is nothing stopping them from doing so.
As far as SIKT is concerned, competition between institutions is best considered in the academic realm, in subjects and provision, as that is what benefits the student. For operations, collaboration is more likely to deliver the best results for both institutions and students. But SIKT remains agnostic about whether specific institutions have a different view. “We don’t at SIKT decide what counts as competitive or not,” says Christian. “Universities will decide for themselves whether they want to get involved in particular frameworks based on whether they see a competitive advantage or some other advantage from doing so.”
The medium term horizon for the UK sector, based on current discussions, is a much more networked approach to the purchase and utilisation of technology to support learning and teaching – though it’s worth noting that there is nothing stopping consortia of institutions getting together to negotiate a shared set of requirements with a particular vendor pending the development of national frameworks. There’s no reason to think the learning curve even needs to be especially steep – while some of the technical elements could require a bit of thinking through, the sector has a longstanding commitment to sharing and collaboration on high quality teaching and learning, and to some extent what’s being talked about right now is mostly about joining the dots between one domain and another.
This article is published in association with UNIwise. For further information about UNIwise and the opportunity to collaborate contact Tim Peers, Head of Partnerships.
Students from the class of 2024 had historically low scores on a major national test administered just months before they graduated.
Results from the National Assessment of Educational Progress, or NAEP, released September 9, show that scores for 12th graders declined in math and reading for all but the highest-performing students, and that gaps between high and low performers in math widened. More than half of these students reported being accepted into a four-year college, but the test results indicate that many of them are not academically prepared for college, officials said.
“This means these students are taking their next steps in life with fewer skills and less knowledge in core academics than their predecessors a decade ago, and this is happening at a time when rapid advancements in technology and society demand more of future workers and citizens, not less,” said Lesley Muldoon, executive director of the National Assessment Governing Board. “We have seen progress before on NAEP, including greater percentages of students meeting the NAEP proficient level. We cannot lose sight of what is possible when we use valuable data like NAEP to drive change and improve learning in U.S. schools.”
In a statement, Education Secretary Linda McMahon said the results show that federal involvement has not improved education, and that states should take more control.
“If America is going to remain globally competitive, students must be able to read proficiently, think critically, and graduate equipped to solve complex problems,” she said. “We owe it to them to do better.”
The students who took this test were in eighth grade in March of 2020 and experienced a highly disrupted freshman year of high school because of the pandemic. Those who went to college would now be entering their sophomore year.
Roughly 19,300 students took the math test and 24,300 students took the reading test between January and March of 2024.
The math test measures students’ knowledge in four areas: number properties and operations; measurement and geometry; data analysis, statistics, and probability; and algebra. The average score was the lowest it has been since 2005, and 45% of students scored below the NAEP Basic level, even as fewer students scored at NAEP Proficient or above.
NAEP Proficient typically represents a higher bar than grade-level proficiency as measured on state- and district-level standardized tests. A student scoring in the proficient range might be able to pick the correct algebraic formula for a particular scenario or solve a two-dimensional geometric problem. A student scoring at the basic level likely would be able to determine probability from a simple table or find the population of an area when given the population density.
Only students in the 90th percentile — the highest achieving students — didn’t see a decline, and the gap between high- and low-performing students in math was wider than on any previous assessment.
This gap between high and low performers appeared before the pandemic, but has widened in most grade levels and subject areas since. The causes are not entirely clear but might reflect changes in how schools approach teaching as well as challenges outside the classroom.
Testing officials estimate that 33% of students from the class of 2024 were ready for college-level math, down from 37% in 2019, even as more students said they intended to go to college.
In reading, students similarly posted lower average scores than on any previous assessment, with only the highest performing students not seeing a decline.
The reading test measures students’ comprehension of both literary and informational texts and requires students to interpret texts and demonstrate critical thinking skills, as well as understand the plain meaning of the words.
A student scoring at the basic level likely would understand the purpose of a persuasive essay, for example, or the reaction of a potential audience, while a student scoring at the proficient level would be able to describe why the author made certain rhetorical choices.
Roughly 32% of students scored below NAEP Basic, 12 percentage points higher than students in 1992, while fewer students scored above NAEP Proficient. An estimated 35% of students were ready for college-level work, down from 37% in 2019.
In a survey attached to the test, students in 2024 were more likely to report having missed three or more days of school in the previous month than their counterparts in 2019. Students who miss more school typically score lower on NAEP and other tests. Higher performing students were more likely to say they missed no days of school in the previous month.
Students in 2024 were less likely to report taking pre-calculus, though the rates of students taking both calculus and algebra II were similar in 2019 and 2024. Students reported less confidence in their math abilities than their 2019 counterparts, though students in 2024 were actually less likely to say they didn’t enjoy math.
Students also reported lower confidence in their reading abilities. At the same time, higher percentages of students than in 2019 reported that their teachers asked them to do more sophisticated tasks, such as identifying evidence in a piece of persuasive writing, and fewer students reported a low interest in reading.
Chalkbeat is a nonprofit news site covering educational change in public schools.
This HEPI guest blog was kindly authored by Pamela Baxter, Chief Product Officer (English) at Cambridge University Press & Assessment. Cambridge University Press & Assessment is a partner of HEPI.
UK higher education stands at a crossroads: one of our greatest exports is at risk. Financial pressures are growing. International competition for students is more intense than ever. As mentioned in Cambridge’s written evidence to the Education Select Committee’s Higher Education and Funding: Threat of Insolvency and International Students inquiry, one of the crucial levers for both quality and stability is how we assess the English language proficiency of incoming international students. This will not only shape university finances and outcomes but will have serious implications for the UK’s global reputation for educational excellence.
The regional and national stakes
The APPG for International Students’ recent report, The UK’s Global Edge, Regional Impact and the Future of International Students, makes clear that the flow of international students is not only a localised phenomenon. Their presence sustains local economies and drives job creation in regions across the UK. They help deliver on the Government’s wider ambitions for creating opportunities for all by bringing investment and global connectivity to towns and cities. Their impact also stretches to the UK’s position on the world stage, as recruitment and academic exchange reinforce our soft power and bolster innovation.
International students bring nearly £42 billion to the UK economy each year, the equivalent of every citizen being around £560 better off. International talent is embedded in key sectors of life across the nations, with almost one in five NHS staff coming from outside the UK and more than a third of the fastest-growing UK start-ups founded or co-founded by immigrants. As HEPI’s most recent soft power index showed, 58 serving world leaders received higher education in the UK.
The value of higher education is rising
According to the OECD’s Education at a Glance 2025 report – recently launched in the UK in collaboration with HEPI and Cambridge University Press & Assessment – higher education is delivering greater benefits than ever. Nearly half of young adults in OECD countries now complete tertiary education. The returns for individuals and societies in terms of employment, earnings and civic participation are substantial. But when attainment in higher education is so valuable, deficiencies in the preparation of students – including inadequate English language skills – can have considerable costs.
Why robust testing matters
Robust English language testing is, therefore, fundamental. It ensures that international students can fully participate in academic life and succeed in their chosen courses. It also protects universities from the costs that arise when students are underprepared.
The evidence is clear that not all tests provide the same level of assurance. Regulated secure English language tests such as IELTS have demonstrated reliability and validity over decades. By contrast, newer and under-regulated at-home tests have been linked to weaker student outcomes. A recent peer-reviewed study in the ELT Journal found that students admitted on the basis of such tests often struggled with the academic and communicative demands of their courses.
The HOELT moment
The proposed introduction of a Home Office English Language Test (HOELT) raises the stakes still further. The Home Office has indicated an interest in at-home invigilation. While innovation of this kind may appear to offer greater convenience, it also risks undermining quality, fairness and security. The HOELT process must be grounded in evidence, setting high minimum standards and ensuring robust protections against misuse. High-stakes decisions such as the creation of HOELT should not be driven by cost or convenience alone. They should be driven, instead, by whether the system enables talented students to succeed in the UK’s competitive academic environment, while safeguarding the country’s immigration processes.
Conclusion: Sustaining and supporting international student success
International students enhance the UK’s educational landscape, bolster the UK’s global reputation and contribute to long-term growth and prosperity. But the benefits they bring are not guaranteed. Without trusted systems for English language assessment, we risk undermining the very conditions that allow them to thrive and contribute meaningfully.
As the Government pursues the creation of its own HOELT, it has a unique opportunity to ensure policy is evidence-led and quality-driven. Doing so will not only safeguard students and UK universities but will also reinforce the UK’s standing as a world leader in higher education.
Your chance to engage: Join Cambridge University Press & Assessment and HEPI at Labour Party Conference 2025
These and other issues will be explored in greater detail at Cambridge University Press & Assessment’s forthcoming event in partnership with HEPI at the Labour Party Conference 2025, where policymakers and sector leaders will come together to consider how to secure and strengthen UK higher education on a global stage.