Blog

  • Rural teacher shortages could get worse thanks to Trump’s visa fee

    by Ariel Gilreath, The Hechinger Report
    November 7, 2025

    HALIFAX COUNTY, N.C. – When Ivy McFarland first traveled from her native Honduras to teach elementary Spanish in North Carolina, she spent a week in Chapel Hill for orientation. By the end of that week, McFarland realized the college town on the outskirts of Raleigh was nowhere near where she’d actually be teaching.

    On the car ride to her school district, the city faded into the suburbs. Those suburbs turned into farmland. The farmland stretched into more farmland, until, two hours later, she made it to her new home in rural Halifax County.

    “I was like, ‘Oh my God, this is far,’” McFarland said. “It was shocking when I got here, and then I felt like I wanted to go back home.”

    Nine years later, she’s come to think of Halifax County as home.

    In this stretch of rural North Carolina, teachers hail from around the globe: Jamaica, the Philippines, Honduras, Guyana. Of the 17 teachers who work at Everetts Elementary School in the Halifax County school district, two are from the United States. 

    In this rural school district surrounded by rural school districts, recruiting teachers has become a nearly impossible task. With few educators applying for jobs, schools like Everetts Elementary have relied on international teachers to fill the void. Districtwide, 101 of 156 educators are international. 

    “We’ve tried recruiting locally, and it just has not worked for us,” said Carolyn Mitchell, executive director of human resources in the eastern North Carolina district of about 2,100 students. “Halifax is a rural area, and a lot of people just don’t want to work in rural areas. If they’re not people who are from here and want to return, it’s challenging.” 

    Around the country, many rural schools are contending with a shortage of teacher applicants that has ballooned into a crisis in recent years. Fewer students are enrolling in teacher training programs, shrinking the pipeline and making vacancies especially hard to fill for districts with smaller tax bases and fewer resources than their suburban and urban peers. In certain grade levels and subject areas — like math and special education — the challenge is particularly acute. Now, some of the levers rural schools have used to boost their teacher recruitment efforts are also disappearing.

    This spring, the federal Department of Education eliminated teacher residency and training grants for rural schools. In September, President Donald Trump announced a $100,000 fee on new H-1B visa applications — visas hundreds of schools like Everetts Elementary use to hire international teachers for hard-to-staff positions — saying industries were using the visas to replace American workers with “lower-paid, lower-skilled labor.” A lawsuit filed by a coalition of education, union, nonprofit and other groups is challenging the fee, citing teacher shortages. Rural schools are also bracing for more cuts to federal funding next year.

    “We’re not only talking about a recruitment and retention problem. We’re talking about the collapse of the rural teacher workforce,” said Melissa Sadorf, executive director of the National Rural Education Association.

    Most of Halifax’s international teachers arrive on H-1B visas, which allow them to work in the U.S. for about five years with the possibility of a green card at the end of that period. About one-third of the district’s international teachers have J-1 visas, which let them work in the country for three years with the possibility of renewing for two more. At the end of those five years, educators on J-1 visas are required to return to their home countries.

    A few years ago, Halifax County Schools decided to shift from hiring teachers on J-1 visas to H-1Bs, hoping it would reduce teacher turnover and keep educators in their classrooms for longer. The results have been mixed, Mitchell said, because within a few years, some of their teachers ended up transferring to bigger, higher-paying districts anyway.

    There are trade-offs for the teachers, too. Mishcah Knight came to the U.S. from Jamaica both to expand her skills and increase her pay as an educator. In the rural North Carolina county, finding transportation has been the biggest challenge for Knight, who teaches second grade. 

    She lacks a credit history needed to buy a car, leaving her reliant on carpooling to work. A single taxi driver serves the area, which doesn’t have public transit, Uber or Lyft. “Sometimes, he’s in Virginia,” Knight said. “It’s lucky when we actually get him to take us somewhere.”

    Being away from family also takes its toll on teachers. Nar Bell Dizon, who has taught music at Everetts Elementary since 2023, had to leave his wife and son back home in the Philippines. He visits in the summer, but during the school year, he sees them only through video calls. 

    “This is what life is — not everything is smooth,” Dizon said. “There will always be struggles and sacrifices.”

    Dizon’s first year at Everetts Elementary School was hard — it took time to adapt to a different teaching style and classroom management. Now that he’s in his third year, he feels like he’s gotten his feet beneath him.

    “When you can build a rapport with your students, things become easier,” Dizon said.

    When her international teachers are able to stay for longer, the students perform better, said Chastity Kinsey, principal of Everetts Elementary. “I know the benefit the teachers bring to the classroom,” Kinsey said. “After the first year or two, they normally take off like rock stars.” 

    Trump’s new fee does not address any of the challenges the Halifax district had with the H-1B visa, and it effectively slams the door on future hires. Now, the district will have to rely on J-1 visas to recruit new international teachers, meaning the educators will have to leave just as they’ve acclimated to their classrooms.

    “We just can’t afford to,” Mitchell said of paying the $100,000 fee. Other districts, she said, might turn to waivers allowing them to increase class sizes and hire fewer teachers, among other strategies.

    Since the applicant pool began drying up about a decade ago, the makeup of the district’s teaching staff has slowly shifted to international teachers.

    At the heart of the problem is that when a position opens up, few, if any, citizens apply, said Katina Lynch, principal of Aurelian Springs Institute of Global Learning, an elementary school in Halifax County. 

    When Lynch had to hire a new fourth grade teacher this summer, she received three applications: Only one was a licensed teacher from the U.S.

    Nationally, about 1 in 8 teaching positions are either vacant or filled by teachers who are not certified for the position, according to data from the nonprofit Learning Policy Institute, published in July. In addition to fewer college students graduating with degrees in education, diminished public perception of the teaching profession and political polarization of schools are to blame, school leaders said. In some states, the growth of charter and private school options has made competing for teachers even harder. On top of a widening pay gap between rural and urban districts, it’s a perfect storm for schools in more remote parts of the country, said Sadorf.

    In rural Bunker Hill, Illinois, where more than 500 students attend two schools, some positions have gone unfilled for years. “We’ve posted for a school psychologist for years, never had anybody apply. We posted for a special ed teacher — have not had anybody apply. We’ve posted for a high school math teacher two years in a row,” said Superintendent Todd Dugan. “No applicants.”

    As a result, students often end up with a long-term substitute or an unlicensed student teacher. 

    When teachers do arrive in the district, Dugan works hard to try to get them to stick around. He pairs new teachers with experienced mentors, and uses federal funding to help those who want master’s degrees to afford them. 

    He also formed a calendar committee to give teachers input on which days they get off during the year. “More than pay, having at least a little bit of involvement, control and say in your work environment will cause people to stay,” said Dugan. It seems to be working: Bunker Hill’s teacher retention rate is more than 92 percent. 

    Schools across the country face the same challenges to varying degrees. Several years ago, the Everett Area School District in southern Pennsylvania would receive 30 to 50 applications for a given position at its elementary schools, Superintendent Dave Burkett said. Now, they’re lucky if they get three or four.

    Last year, the district learned that a middle school science teacher would retire that summer. Just three people applied for the opening, and only one was certified for the role.

    “We offered the job before that person even left the building,” Burkett said. The candidate accepted it, but by the time the paperwork was due that summer, the teacher had taken a different job in a bigger district.

    One way Burkett has tried to address the shortage is to hire a permanent, full-time substitute teacher in each of the district’s buildings. If a vacancy opens up that they haven’t been able to fill, the full-time substitute can step in until a permanent replacement is found. The permanent substitute makes more than a traditional sub and also receives health insurance.

    Sadorf, with the National Rural Education Association, says other ways to help include introducing students to teacher training pathways starting in high school, building “grow-your-own” programs to train local people for teaching jobs, and offering loan forgiveness and housing support.

    Sadorf’s organization is in favor of creating an educator-specific visa track that would allow international teachers to be in communities for longer. The group is also in favor of exempting schools from the $100,000 H-1B fee. “Stabilizing federal support is something that really needs to be focused on at the federal level,” Sadorf said.

    At Everetts Elementary in Halifax County, McFarland, the educator from Honduras, is among the most senior teachers in the school. She has adapted to the rural community, where she met and fell in love with her now-husband. She gets asked sometimes why she hasn’t moved to a bigger city.

    “Education has taken me places I’ve never expected,” McFarland said. “For me, being here, there’s a reason for it. I see the difference I can make.”

    Contact staff writer Ariel Gilreath on Signal at arielgilreath.46 or at [email protected].

    This story about the visa fee was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

    This article first appeared on The Hechinger Report (https://hechingerreport.org/federal-policies-risk-worsening-an-already-dire-rural-teacher-shortage/) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).

    Source link

  • From the Classroom to the Career Office: Why Career Readiness Belongs in Every Discipline – Faculty Focus


  • The time for change is now: reducing pension costs in post-92 universities

    This blog was kindly authored by Jane Embley, Chief People Officer and Tom Lawson, Deputy Vice-Chancellor and Provost, both of Northumbria University.

    It is welcome that the government’s recent white paper acknowledges the very real funding pressures on the university sector and outlines some measures to address them. It is rather disappointing, however, that one of the causes of that financial pressure – recognised by both employers and trade unions – is somewhat sidestepped: namely, the crisis in post-92 institutions caused by the Teachers’ Pension Scheme (TPS). While the government has pledged to better understand the problem, this will presumably lead to a period of consultation before any new proposals come forward. The cost of TPS compounds the financial difficulty of many institutions, and the severity of the current situation means the moment for change is now.

    The TPS cost crisis

    At the beginning of 2025, we wrote a piece for this website that outlined the problem in general terms and, in particular, for Northumbria University. To briefly summarise: post-92 institutions are all required to enrol their staff who are engaged in teaching in TPS. The cost of TPS for employers (and employees) is rising and, having historically been similar to other pension schemes in the sector, is now much more expensive than schemes such as the Universities Superannuation Scheme (USS) or the Local Government Pension Scheme (LGPS). TPS employer contributions are now 28.68%, whereas for USS they are 14.5% and for Northumbria’s LGPS fund 18.5%.

    This means that for an academic salary of £57,500, in addition to NI costs, the employer pension cost is £8,300 per annum for USS, but for a TPS employee it is £16,500. Put simply, it is now considerably more expensive to employ a member of staff to do the same job in one part of the sector than another.

    The figures are striking. For every 1,000 staff, an institution would face more than £8M per annum of additional costs if their colleagues were members of TPS rather than USS. For Northumbria, given the number of colleagues we have in TPS, the additional cost of this scheme compared to USS is more than £11M per annum. To put it another way, the fees of more than 800 Northumbria students are fully consumed by paying the additional cost of TPS, versus USS.
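    The arithmetic behind these figures can be checked directly. A minimal sketch, using only the contribution rates (28.68% and 14.5%) and the £57,500 example salary quoted above; this is simple arithmetic for illustration, not Northumbria’s actual payroll model:

    ```python
    # Sanity check of the employer-contribution gap described in the text.
    # Rates and the example salary come from the article itself.
    TPS_RATE = 0.2868   # TPS employer contribution rate
    USS_RATE = 0.145    # USS employer contribution rate

    salary = 57_500
    tps_cost = salary * TPS_RATE                  # ~£16,500 per annum
    uss_cost = salary * USS_RATE                  # ~£8,300 per annum
    gap_per_1000 = (tps_cost - uss_cost) * 1_000  # just over £8M per 1,000 staff
    ```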

    Why alternatives fall short

    There are ways that universities can find alternatives to TPS – institutions can take steps to employ their academic staff via subsidiary companies and reduce pension costs by using defined contribution schemes. This has multiple disadvantages for individuals as well as institutions – not least because colleagues employed by that mechanism are not counted within the HESA return, for example, and as such are not eligible for participation in the Research Excellence Framework or for Research Council funding. As such, colleagues employed via such mechanisms cannot fully contribute across teaching and research and may find it difficult to progress their careers or move between institutions in the future.

    At Northumbria, as a research-intensive institution, we did not consider the above to be a path we could take. As there are no clear proposals forthcoming from government we have had to seek recourse to a different solution.

    Northumbria’s strategic response

    As we predicted in our previous blog, individual institutions have no choice but to take control of the total cost of employment. Since then, at Northumbria, we have been thinking about how we might do just that. We have settled on an approach that follows a three-part solution, something which we believe offers flexibility and choice while managing the University’s pension costs down to an acceptable level in the medium to long term.  

    First, we are offering colleagues in TPS an attractive alternative – the main pension scheme in the sector, USS, following a recent agreement to change our membership terms. Over 200 colleagues at Northumbria are already members (having joined Northumbria with existing membership), and going forward, USS membership will be available to all our academic colleagues. Of course, we acknowledge that there are differences in the membership benefits of each scheme. USS is a hybrid scheme with defined benefits up to a threshold and then defined contributions beyond that. TPS is a career average defined benefit scheme. We will help our TPS members with this transition by providing personalised, independent financial information and guidance, as pensions are complex and any decision to move from TPS to USS will need careful consideration.

    However, we do need to be confident that we can address the very high cost of TPS employer pension contributions, and have recently begun discussions within our university about moving to a total reward approach to remuneration.

    Using the two pension schemes, we want to provide colleagues with the choice as to how much of their total reward they receive as income now and how much we pay in pension contributions.

    For each grade point in our pay structure, we are aiming to establish a reward envelope, based on the total cost of salary plus employer pension contributions, reflecting USS rather than TPS rates. As such, a colleague remaining in TPS would have no reduction in their salary, although they will, initially, have a total reward package that exceeds the envelope for their grade point.

    Our goal will be to increase the total reward envelope for each grade point each year by the value of the pay award determined via national collective pay bargaining. In this model, the cost of the total reward envelope will be the same, but colleagues will be able to choose how they construct their reward package based on their own personal preference or circumstances. Salaries for colleagues who are members of USS will increase in line with the rest of the sector. Those colleagues who choose to remain in TPS will not see an increase in their take-home pay, as this, plus the cost of their pension contributions, exceeds the envelope for their grade point. However, over time, when the value of the total reward envelope for colleagues in USS and TPS has equalised, the salaries for those choosing TPS will increase again.
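    As a rough illustration of how that equalisation might play out, here is a sketch using the contribution rates quoted earlier and an assumed 2 per cent annual pay award. The pay award figure is an assumption for illustration (the real award is set by national collective bargaining), and none of these numbers are Northumbria’s actual terms:

    ```python
    # Sketch of the total-reward-envelope model: the envelope (salary plus
    # USS-rate employer contribution) is uprated by the pay award each year,
    # while a TPS member's package (frozen salary plus TPS-rate contribution)
    # stands still until the envelope catches up.
    TPS_RATE = 0.2868
    USS_RATE = 0.145
    PAY_AWARD = 0.02    # assumed annual pay award

    def years_to_equalise(salary: float) -> int:
        envelope = salary * (1 + USS_RATE)
        tps_package = salary * (1 + TPS_RATE)  # salary frozen while above envelope
        years = 0
        while envelope < tps_package:
            envelope *= 1 + PAY_AWARD
            years += 1
        return years

    # With these assumptions, equalisation takes about six years, broadly in
    # line with the "up to seven years" horizon described in this piece.
    ```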

    Looking ahead: a fairer, sustainable future

    We understand that many of our colleagues might find this change unpalatable; however, we feel the additional monthly cost of almost £1M cannot be justified. While to some this will be controversial, ultimately, our proposed approach will mean that over time (likely to be up to seven years) the reward envelope (or cost) for USS and TPS employees will have equalised and as such we will have eliminated the differential costs of employing these two groups of colleagues undertaking the same roles, and be on an equal footing with other universities.

    We anticipate that by adopting this approach USS will, in time, become the normalised pension scheme for our academic staff, as it already is across the pre-92 universities. Along with competitive pay, colleagues will be members of an attractive sector-wide scheme, with lower personal contribution levels resulting in higher take-home pay. Of course, we will keep the whole approach under review as the employer pension contribution rates change over time, and we will be actively engaging with our colleagues over the coming months to seek their views on our proposal and to shape our future plans.  

    Finally, we are also encouraging our colleagues to consider carefully whether to opt out of TPS and join USS now. In order to gain traction and make earlier progress, we are offering existing salaried staff in TPS the choice to move early, with the University recognising this decision via a one-off payment, which shares the longer-term financial benefit of this with the University. Colleagues may receive the value of the savings made over the first year – typically between £5,800 and a maximum of £10,000 – as a taxable payment or via a payment into their pension, subject to a number of conditions in relation to their future employment.

    As we have outlined, the time for change is now, and we cannot wait for the outcome of a consultation or for the government to decide how it will seek to address this obvious disparity in the sector. Ultimately, we believe that moving towards a total reward approach, as outlined above, is advantageous for both the University and for our colleagues. It provides choice – no one will be forced to leave TPS, and as such, colleagues can continue to choose to receive the benefits of that scheme by more of their total reward being paid in pension contributions than salary. Or colleagues can choose to access more of their total income now in their salary, while joining a hybrid pension scheme that is already in place across the sector and which delivers defined benefits, and defined contribution benefits for higher earners. We believe that this is a novel approach to what has been, for some time, an intractable problem in the sector.

  • Higher education postcard: Keble College, Oxford

    Greetings from Oxford!

    Let me start with an uncontroversial statement: the nineteenth century was very different to the current century. As L P Hartley had it, “the past is a foreign country; they do things differently there.”

    One thing going on in that century was a reform movement within Anglicanism. The Tractarians, also known as the Oxford Movement (so called because it was centred upon Oxford), were a group of Anglicans who sought to move the Church of England closer to Roman Catholicism on some matters (yes, I know this is a very simplified version). Among the leading figures – alongside John Henry Newman, now St John Henry Newman – was John Keble.

    But religious controversy wasn’t the only thing on Oxford’s mind. Substantial reform of the university was under way, with changes to governance, a reduction of influence of the church, and a recognition of the need to widen access, to use the modern term. One avenue being explored was the creation of a new, more affordable, college. The committee working on this included Professor Pusey, a fellow Tractarian. He showed the plans to Keble, who was very much in favour. And then Keble died.

    His friends discussed what to do in tribute, and decided, as you do, that founding the college which Pusey had been discussing was the right thing. And so an appeal was launched, funds were raised, and the project progressed.

    It’s worth noting that this was a new model for Oxbridge colleges: previously colleges were endowed by a rich patron – monarch, noble, church – but this was Victorian crowdfunding in action. And it was a model which possibly influenced the fundraising for the new universities and colleges which followed soon after – Bangor, for example, whose public subscription raised £12,000.

    The college opened to new students in 1870. It hasn’t been without its critics – St John’s students formed a society to dismantle Keble which has, to date, been ineffective in its aims. Its distinctive buildings have been the source of much comment. They’ve been called “a dinosaur in a Fair Isle sweater” (which, to be fair, is a sight most of us would pay to see). Apocryphally, a French visitor is reputed to have said “C’est magnifique, mais ce n’est pas la gare?” – “it’s magnificent, but isn’t it the station?” (I think the station is in fact about half a league away).

    The college really did seek to make life economical for its students. Its buildings placed student rooms along corridors rather than off staircases, which was, apparently, a saving. I guess staircases contained suites rather than single rooms? I am honestly not sure what to make of this claim. I’ve also seen it claimed that the corridors made it easier for visitors to be supervised, which seems more plausible.

    Another saving came in 1871, when Keble issued its own stamps, allowing students to send mail – only within Oxford, presumably – via the college porters. This was copied at a few other Oxford and Cambridge colleges, until in 1885 the Post Office decided that this infringed on its monopoly and insisted that the service cease.

    Keble is now one of the larger Oxford colleges, with about 1,000 students all told. Famous alumni include Ed Balls, the former Shadow Chancellor of the Exchequer and celebrity self-searcher on Twitter. Another is Imran Khan, who has been both a wonderful cricketer and a Prime Minister of Pakistan. Howzat for a career?

    Here’s a jigsaw of the card. It was posted on 29 September 1914 to a Mrs Wood in Southampton.

    As best as I can tell, the card reads:

    Dear M + F [Mother and Father?], Arrived quite safe at Oxford. I am enjoying our long [????]. We proceed to Basingstoke [?] tomorrow. Will write a letter as soon as we reach Portsmouth. Will

    And in the inserts: “This is the College where we are staying (what)” and “We don’t remember the old place”.

    I am tempted to think that the card was sent as Will Wood stayed overnight as part of a military detachment on their way to Portsmouth for the continent, but I haven’t got anything other than the date of the card and one reading of its content to back that up. “C” Company of the No. 4 Officer Cadet Battalion was hosted at Keble College during the First World War, but the college’s archives hold no records of this before 1916. So I suspect speculation is all we have here.

  • ED Reaches Consensus On Loan Caps

    A very limited number of degree programs would have access to the highest level of loans under a new set of regulations that the Department of Education and its negotiating committee signed off on Thursday.

    The regulations, written in response to the loan caps of Congress’s One Big Beautiful Bill Act, allow students in programs that qualify as professional to take out up to $200,000. Meanwhile, graduate students will only be able to take out up to $100,000.

    What was up for debate throughout the two-week negotiation process was which degree programs qualify for which level of loans.

    And while Thursday’s definition of professional programs was slightly more inclusive than the department’s original suggestion—a list of 10 degrees, including medicine, law, dentistry and a master of divinity—it is not as expansive as a third proposal put forward by Alex Holt, the committee member representing taxpayers and the public interest.

    The final definition limits professional programs to the original 10 programs, a doctorate in clinical psychology, and a handful of other doctorate programs that fall within the same four-digit CIP codes. By comparison, Holt’s plan would have included any program that is 80 credit hours long, regardless of whether it was a master’s or doctorate degree, so long as it fell within the same two-digit CIP code. (CIP, the Classification of Instructional Programs, is an organizational system ED uses to group similar academic programs.)

    On Thursday, before the committee’s final consensus vote, department officials explained to committee members that if they did not agree to their definition of a professional degree, they could lose out on other “concessions” they had won from the department. Without consensus, the department would legally be free to rewrite any aspect of the proposal prior to releasing it for public comment. (The proposal that reached consensus will still be subject to public comment.)

    “I also would like to remind everyone of numerous things that we have chosen to do in these negotiations that you requested for us to do,” said Tamy Abernathy, the department’s negotiator, before listing a slew of other changes the department made concerning the transition to new loan repayment plans and how to grandfather in existing borrowers to new loan policies.

    Under Secretary Nicolas Kent noted before the vote that the proposal was “not a perfect definition, but … a perfect definition for the purposes of consensus.”

    “We recognize that not every stakeholder group will be thrilled about our proposal,” Kent said. “But I want to remind everybody what consensus means, and that means that if you all agree, or can live with it—because we don’t have to love it—that we will take that regulatory language and put it into the notice of proposal.”

    Multiple committee members told Inside Higher Ed they agreed with Kent’s evaluation of what it took to reach a compromise.

    Kent closed the meeting by noting that “because we’ve reached consensus, negotiators and their employers will refrain from commenting negatively … as they agreed to do.”

  • Podcast: Banned algorithms, Schools curriculum, Wales student finance

    This week on the podcast we examine the Office for Students’ (OfS) renewed scrutiny of degree classification algorithms and what it means for confidence in standards.

    We explore the balance between institutional autonomy, transparency for students and employers, and the evidence regulators will expect.

    Plus we discuss the government’s response to the Francis review of curriculum and assessment in England, and the Welsh government’s plan to lift the undergraduate fee cap in 2026–27 to align with England with a 2 per cent uplift to student support.

    With Alex Stanley, Vice President for Higher Education of the National Union of Students, Michelle Morgan, Dean of Students at the University of East London, David Kernohan, Deputy Editor at Wonkhe and presented by Mark Leach, Editor-in-Chief at Wonkhe.

    Algorithms aren’t the problem. It’s the classification system they support

    The Office for Students steps on to shaky ground in an attempt to regulate academic standards

    Universities in England can’t ignore the curriculum (and students) that are coming

    Diamond’s a distant memory as Wales plays inflation games with fees and maintenance

    What we still need to talk about when it comes to the LLE

    You can subscribe to the podcast on Apple Podcasts, YouTube Music, Spotify, Acast, Amazon Music, Deezer, RadioPublic, Podchaser, Castbox, Player FM, Stitcher, TuneIn, Luminary or via your favourite app with the RSS feed.

  • VICTORY! Federal district court dismisses class-action suit against pollster J. Ann Selzer

    DES MOINES, Iowa, Nov. 6, 2025 — A federal district court today dismissed with prejudice a lawsuit against renowned Iowa pollster J. Ann Selzer, holding that the First Amendment bars the claims against her related to her October 2024 general election poll. As the court explained, “there is no free pass around the First Amendment.”

    FIRE’s defense of pollster J. Ann Selzer against Donald Trump’s lawsuit is First Amendment 101

    A polling miss isn’t ‘consumer fraud’ or ‘election interference’ — it’s just a prediction and is protected by the First Amendment.



    The lawsuit, brought by a subscriber to The Des Moines Register and styled as a class action, stemmed from a poll Selzer published before the 2024 presidential election that showed Vice President Kamala Harris leading by three points in Iowa. The suit asserted claims, including under Iowa’s Consumer Fraud Act, alleging that Selzer’s poll, which missed the final result by a wide margin, constituted “fake news” and “fraud.”

    Selzer, represented pro bono by FIRE, pushed back. FIRE explained that commentary about a political election is core protected speech. “Fake news” is a political buzzword, not a legal cause of action. And “fraud” is a defined legal concept: intentionally lying to convince someone to part with something of value. 

    The court explained, “polls are a mere snapshot of a dynamic and changing electorate” and “the results of an opinion poll are not an actionable false representation merely because the anticipated results differ from what eventually occurred.” As the Supreme Court has said, a party cannot evade First Amendment scrutiny by “simply labeling an action one for fraud.”

    The court held the plaintiff had “no factual allegations” to support his fraud claim, instead “invok[ing] mere buzzwords and speculation.” And not only did the court find the First Amendment barred the claims, it also held each claim defective under Iowa law even without the First Amendment’s protection.

    Selzer is pleased with the result:

    I am pleased to see this lawsuit has been dismissed. The First Amendment’s protection for free speech and a free press held strong. I know that I did nothing wrong and I am glad the court also concluded that there was never a valid legal claim.

    FIRE’s Chief Counsel Bob Corn-Revere, who led Selzer’s defense, responded to the ruling: 

    This decision shows where petty politics ends and the rule of law begins. The court’s strongly worded opinion confirms that a legal claim cannot be concocted with political slogans and partisan hyperbole, and that there is no hiding from the First Amendment. This is a good day for freedom of speech.

    This lawsuit was a copycat of a still-pending suit filed by President Donald Trump against Selzer in December 2024 in which FIRE also represents her. FIRE Supervising Senior Attorney Conor Fitzpatrick remarked, “President Trump’s suit makes the same frivolous arguments against the same defendants. We are confident it will meet the same fate.”


    The Foundation for Individual Rights and Expression (FIRE) is a nonpartisan, nonprofit organization dedicated to defending and sustaining the individual rights of all Americans to free speech and free thought — the most essential qualities of liberty. FIRE recognizes that colleges and universities play a vital role in preserving free thought within a free society. To this end, we place a special emphasis on defending the individual rights of students and faculty members on our nation’s campuses, including freedom of speech, freedom of association, due process, legal equality, religious liberty, and sanctity of conscience.

    CONTACT:
    Karl de Vries, director of media relations, FIRE: 215-717-3473; [email protected]


  • The Office for Students steps on to shaky ground in an attempt to regulate academic standards


    The funny thing about today’s intervention by the Office for Students is that it is not really about grade inflation, or degree algorithms.

    I mean, it is on one level: we get three investigation reports on providers related to registration condition B4, and an accompanying “lessons learned” report that focuses on degree algorithms.

    But the central question is about academic standards – how they are upheld, and what role an arm of the government has in upholding them.

    And it is about whether OfS has the ability to state that three providers are at “increased risk” of breaching a condition of registration on the scant evidence of grade inflation presented.

    And it is certainly about whether OfS is actually able to dictate (or even strongly hint at its revealed preferences on) the way degrees are awarded at individual providers, or the way academic standards are upheld.

    If you are looking for the rule book

    Paragraph 335N(b) of the OfS Regulatory Framework is the sum total of the advice it has offered to the sector on degree algorithms before today.

    The design of the calculations that turn a collection of module marks (each assessed carefully against criteria set out in the module handbook, and cross-checked by an academic from another university against an understanding of what should be expected of students) into an award of a degree at a given classification is a potential area of concern:

    where a provider has changed its degree classification algorithm, or other aspects of its academic regulations, such that students are likely to receive a higher classification than previous students without an increase in their level of achievement.

    These circumstances could potentially be a breach of condition of registration B4, which relates to “Assessment and Awards” – specifically condition B4.2(c), which requires that:

    academic regulations are designed to ensure that relevant awards are credible;

    Or B4.2(e), which requires that:

    relevant awards granted to students are credible at the point of being granted and when compared to those granted previously

    The current version of condition B4 came into force in May 2022.

    In the mighty list of things that OfS needs to have regard to that we know and love (section 2 of the 2017 Higher Education and Research Act), we learn that OfS has to pay mind to “the need to protect the institutional autonomy of English higher education providers” – and, in the way it regulates that it should be:

    Transparent, accountable, proportionate, and consistent and […] targeted only at cases where action is needed

    Mutant algorithms

    With all this in mind, we look at the way the regulator has acted on this latest intervention on grade inflation.

    Historically the approach has been one of assessing “unexplained” (even once, horrifyingly, “unwarranted”) good honours (1 or 2:1) degrees. There’s much more elsewhere on Wonkhe, but in essence OfS came up with its own algorithm – taking into account the degrees awarded in 2010-11 and the varying proportions of students in given subject areas, with given A levels and of a given age – that starts from the position that non-traditional students shouldn’t be getting as many good grades as their (three good A level straight from school) peers, and if they did then this was potentially evidence of a problem.

    To quote from annex B (“statistical modelling”) of last year’s release:

    “We interact subject of study, entry qualifications and age with year of graduation to account for changes in awarding […] our model allows us to statistically predict the proportion of graduates awarded a first or an upper second class degree, or a first class degree, accounting for the effects of these explanatory variables.”

    When I wrote this up last year I plotted the impact each of these variables is expected to have – the fixed effect coefficient estimates show the increase (or decrease) in the likelihood of a person getting a first or upper second class degree.


    One is tempted to wonder whether the bit of OfS that deals with this issue ever speaks to the bit that is determined to drive out awarding gaps based on socio-economic background (which, as we know, very closely correlates with A level results). This is certainly one way of explaining why – if you look at the raw numbers – the providers that award the most first class and 2:1 degrees are in the Russell Group and at small selective specialist providers.


    Based on this model (which for 2023-24 failed to accurately predict fully fifty per cent of the grades awarded) OfS selected – back in 2022(!) – three providers where it felt that the “unexplained” awards had risen surprisingly quickly over a single year.

    What OfS found (and didn’t find)

    Teesside University was not found to have ever been in breach of condition B4 – OfS was unable to identify statistically significant differences in the proportion of “good” honours awarded to a single cohort of students if it applied each of the three algorithms Teesside has used over the past decade or so. There has been – we can unequivocally say – no evidence of artificial grade inflation at Teesside University.

    St Mary’s University, Twickenham and the University of West London were found to have historically been in breach of condition B4. The St Mary’s issue related to an approach that was introduced in 2016-17 and was replaced in 2021-22, in West London the offending practice was introduced in 2015-16 and replaced in 2021-22. In both cases, the replacement was made because of an identified risk of grade inflation. And for each provider a small number of students may have had their final award calculated using the old approach since 2021-22, based on a need to not arbitrarily change an approach that students had already been told about.

    To be clear – there is no evidence that either university has breached condition B4 (not least because condition B4 came into force after the offending algorithms had been replaced). In each instance the provider in question has made changes based on the evidence it has seen that an aspect of the algorithm is not having the desired effect, exactly the way in which assurance processes should (and generally do) work.

    Despite none of the providers in question currently being in breach of B4, all three are now judged to be at an increased risk of breaching condition B4.

    No evidence has been provided as to why these three particular institutions are at an “increased risk” of a breach while others who may use substantially identical approaches to calculating final degree awards (but have not been lucky enough to undergo an OfS inspection on grade inflation) are not. Each is required to conduct a “calibration exercise” – basically a review of their approach to awarding undergraduate degrees of the sort each has already completed (and made changes based on) in recent years.

    Vibes-based regulation

    Alongside these three combined investigation/regulatory decision publications comes a report on Bachelors’ degree classification algorithms. This purports to set out the “lessons learned” from the three reports, but it actually sets up what amounts to a revision to condition B4.

    We recognise that we have not previously published our views relating to the use of algorithms in the awarding of degrees. We look forward to positive engagement with the sector about the contents of this report. Once the providers we have investigated have completed the actions they have agreed to undertake, we may update it to reflect the findings from those exercises.

    The important word here is “views”. OfS expresses some views on the design of degree algorithms, but it is not the first to do so and there are other equally valid views held by professional bodies, providers, and others – there is a live debate and a substantial academic literature on the topic. Academia is the natural home of this kind of exchange of views, and in the crucible of scholarly debate evidence and logical consistency are winning moves. Having looked at every algorithm he could find, Jim Dickinson covers the debates over algorithm characteristics elsewhere on the site.

    It does feel like these might be views expressed ahead of a change to condition B4 – something that OfS does have the power to do, but would most likely (in terms of good regulatory practice, and the sensitive nature of work related to academic standards managed elsewhere in the UK by providers themselves) be subject to a full consultation. OfS is suggesting that it is likely to find certain practices incompatible with the current B4 requirements – something which amounts to a de facto change in the rules even if it has been done under the guise of guidance.

    Providers are reminded that (as they are already expected to do) they must monitor the accuracy and reliability of current and future degree algorithms – and there is a new reportable event: providers need to tell OfS if they make a change to their algorithm that may result in an increase in the proportion of “good” honours degrees awarded.

    And – this is the kicker – when they do make these changes, the external calibration they do cannot relate to external examiner judgements. The belief here is that external examiners only ever work at a module level, and don’t have a view over an entire course.

    There is even a caveat – a provider might ask a current or former external examiner to take an external look at their algorithm in a calibration exercise, but the provider shouldn’t rely solely on their views as a “fresh perspective” is needed. This harks back to that rather confusing section of the recent white paper about “assessing the merits of the sector continuing to use the external examiner system” while apparently ignoring the bit around “building the evidence base” and “seeking employers’ views”.

    Academic judgement

    Historically, all this has been a matter for the sector – academic standards in the UK’s world-leading higher education sector have been set and maintained by academics. As long ago as 2019 the UK Standing Committee for Quality Assessment (now known as the Quality Council for UK Higher Education) published a Statement of Intent on fairness in degree classification.

    It is short, clear and to the point, as was then the fashion in quality assurance circles. Right now we are concerned with paragraph b, which commits providers to protecting the value of their degrees by:

    reviewing and explaining how their process for calculating final classifications, fully reflect student attainment against learning criteria, protect the integrity of classification boundary conventions, and maintain comparability of qualifications in the sector and over time

    That’s pretty uncontroversial, as is the recommended implementation pathway in England: a published “degree outcomes statement” articulating the results of an internal institutional review.

    The idea was that these statements would show the kind of quantitative trends that OfS gets interested in, offer some assurance that institutional assessment processes meet the reference points and reflect the expertise and experience of external examiners, and provide a clear and publicly accessible rationale for the degree algorithm. As Jim sets out elsewhere, in the main this has happened – though it hasn’t been an unqualified success.

    To be continued

    The release of this documentation prompts a number of questions, both on the specifics of what is being done and more widely on the way in which this approach does (or does not) constitute good regulatory practice.

    It is fair to ask, for instance, whether OfS has the power to decide that it has concerns about particular degree awarding practices, even where it is unable to point to evidence that these practices are currently having a significant impact on degrees awarded, and to promote a de facto change in interpretation of regulation that will discourage their use.

    Likewise, it seems problematic that OfS believes it has the power to declare that the three providers it investigated are at risk of breaching a condition of registration because they have an approach to awarding degrees that it has decided that it doesn’t like.

    It is concerning that these three providers have been announced as being at higher risk of a breach when other providers with similar practices have not. It is worth asking whether this outcome meets the criteria for transparent, accountable, proportionate, and consistent regulatory practice – and whether it represents action being targeted only at cases where it is demonstrably needed.

    More widely, the power to determine or limit the role and purpose of external examiners in upholding academic standards has not historically been one held by a regulator acting on behalf of the government. The external examiner system is a “sector recognised standard” (in the traditional sense) and generally commands the confidence of registered higher education providers. And it is clearly a matter of institutional autonomy – remember in HERA OfS needs to “have regard to” institutional autonomy over assessment, and it is difficult to square this intervention with that duty.

    And there is the worry about the value and impact of sector consultation – an issue picked up in the Industry and Regulators Committee review of OfS. Should a regulator really be initiating a “dialogue with the sector” when its preferences on the external examiner system are already so clearly stated? And it isn’t just the sector – a consultation needs to ensure that the views of employers (and other stakeholders, including professional bodies) are reflected in whatever becomes the final decision.

    Much of this may become clear over time – there is surely more to follow in the wider overhaul of assurance, quality, and standards regulation that was heralded in the post-16 white paper. A full consultation will help centre the views of employers, course leaders, graduates, and professional bodies – and the parallel work on bringing the OfS quality functions back into alignment with international standards will clearly also have an impact.


  • Algorithms aren’t the problem. It’s the classification system they support


    The Office for Students (OfS) has published its annual analysis of sector-level degree classifications over time, and alongside it a report on Bachelors’ degree classification algorithms.

    The former is of the style (and with the faults) we’ve seen before. The latter is the controversial bit, both in the extent to which parts of it represent a “new” set of regulatory requirements and in the “new” set of rules over what universities can and can’t do when calculating degree results.

    Elsewhere on the site my colleague David Kernohan tackles the regulation issue – the upshots of the “guidance” on the algorithms, including what it will expect universities to do both to algorithms in use now, and if a provider ever decides to revise them.

    Here I’m looking in detail at its judgements over two practices. Universities are, to all intents and purposes, being banned from any system which discounts credits with the lowest marks – a practice which the regulator says makes it difficult to demonstrate that awards reflect achievement.

    It’s also ruling out “best of” algorithm approaches – any university that determines degree class by running multiple algorithms and selecting the one that gives the highest result will have to cease doing so. Any provider still using these approaches by 31 July 2026 has to report itself to OfS.

    Powers and process do matter, as do questions as to whether this is new regulation, or merely a practical interpretation of existing rules. But here I’m concerned with the principle. Has OfS got a point? Do systems such as those described above amount to misleading people who look at degree results over what a student has achieved?

    More, not less

    A few months ago now on Radio 4’s More or Less, I was asked how Covid had impacted university students’ attainment. On a show driven by data, I was wary about admitting that, as a whole, UK HE isn’t really sure.

    When in-person everything was cancelled back in 2020, universities scrambled to implement “no detriment” policies that promised students wouldn’t be disadvantaged by the disruption.

    Those policies took various forms – some guaranteed that classifications couldn’t fall below students’ pre-pandemic trajectory, others allowed students to select their best marks, and some excluded affected modules entirely.

    By 2021, more than a third of graduates were receiving first-class honours, compared to around 16 per cent a decade earlier – with ministers and OfS on the march over the risk of “baking in” the grade inflation.

    I found that pressure troubling at the time. It seemed to me that for a variety of reasons, providers may have, as a result of the pandemic, been confronting a range of faults with degree algorithms – for the students, courses and providers that we have now, it was the old algorithms that were the problem.

    But the other interesting thing for me was what those “safety net” policies revealed about the astonishing diversity of practice across the sector when it comes to working out the degree classification.

    For all of the comparison work done – including, in England, official metrics on the Access and Participation Dashboard over disparities in “good honours” awarding – I was wary about admitting to Radio 4’s listeners that it’s not just differences in teaching, assessment and curriculum that can drive someone getting a First here and a 2:2 up the road.

    When in-person teaching returned in 2022 and 2023, the question became what “returning to normal” actually meant. Many – under regulatory pressure not to “bake in” grade inflation – removed explicit no-detriment policies, and the proportion of firsts and upper seconds did ease slightly.

    But in many providers, many of the flexibilities introduced during Covid – around best-mark selection, module exclusions and borderline consideration – had made explicit and legitimate what was already implicit in many institutional frameworks. And many were kept.

    Now, in England, OfS is to all intents and purposes banning a couple of the key approaches that were deployed during Covid. For a sector that prizes its autonomy above almost everything else, that’ll trigger alarm.

    But a wider look at how universities actually calculate degree classifications reveals something – the current system embodies fundamentally different philosophies about what a degree represents, philosophies that produce systematically different outcomes for identical student performance and that should not be written off lightly.

    What we found

    Building on an exercise David Allen carried out seven years ago, a couple of weeks ago I examined the publicly available degree classification regulations for more than 150 UK universities, trawling through academic handbooks, quality assurance documents and regulatory frameworks.

    The shock for the Radio 4 listener on the Clapham Omnibus would be that there is no standardised national system with minor variations, but rather a patchwork of fundamentally different approaches to calculating the same qualification.

    Almost every university claims to use the same framework for UG quals – the Quality Assurance Agency benchmarks, the Framework for Higher Education Qualifications and standard grade boundaries of 70 for a first, 60 for a 2:1, 50 for a 2:2 and 40 for a third. But underneath what looks like consistency there’s extraordinary diversity in how marks are then combined into final classifications.

    The variations cluster around a major divide. Some universities – predominantly but not exclusively in the Russell Group – operate on the principle that a degree classification should reflect the totality of your assessed work at higher levels. Every module (at least at Level 5 and 6) counts, every mark matters, and your classification is the weighted average of everything you did.

    Other universities – predominantly post-1992 institutions but with significant exceptions – take a different view. They appear to argue that a degree classification should represent your actual capability, demonstrated through your best work.

    Students encounter setbacks, personal difficulties and topics that don’t suit their strengths. Assessment should be about demonstrating competence, not punishing every misstep along a three-year journey.

    Neither philosophy is obviously wrong. The first prioritises consistency and comprehensiveness. The second prioritises fairness and recognition that learning isn’t linear. But they produce systematically different outcomes, and the current system does allow both to operate under the guise of a unified national framework.

    Five features that create flexibility

    Five structural features appear repeatedly across university algorithms, each pushing outcomes in one direction.

    1. Best-credit selection

    This first one has become widespread, particularly outside the Russell Group. Rather than using all module marks, many universities allow students to drop their worst performances.

    One uses the best 105 credits out of 120 at each of Levels 5 and 6. Another discards the lowest 20 credits automatically. A third takes only the best 90 credits at each level. Several others use the best 100 credits at each stage.

    The rationale is obvious – why should one difficult module or one difficult semester define an entire degree?

    But the consequence is equally obvious. A student who scores 75-75-75-75-55-55 across six modules averages 68.3 per cent. At universities where everything counts, that’s a 2:1. At universities using best-credit selection that drops the two 55s, it averages 75 – a clear first.
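    The arithmetic in this example can be sketched in a few lines of Python – a hypothetical illustration of the two approaches, not any university’s actual algorithm:

    ```python
    # Two ways of combining six equally weighted module marks:
    # "everything counts" versus best-credit selection (drop the lowest marks).
    def mean_of(marks):
        return sum(marks) / len(marks)

    def best_credit_mean(marks, keep):
        # Keep only the highest `keep` module marks, discard the rest.
        return mean_of(sorted(marks, reverse=True)[:keep])

    marks = [75, 75, 75, 75, 55, 55]
    print(round(mean_of(marks), 1))              # 68.3 -> a 2:1 where everything counts
    print(round(best_credit_mean(marks, 4), 1))  # 75.0 -> a clear first
    ```

    Same transcript, one classification boundary apart – the gap comes entirely from which marks the algorithm chooses to count.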

    Best-credit selection is the majority position among post-92s, but virtually absent at Russell Group universities. OfS is now pretty much banning this practice.

    The case against rests on B4.2(c) (academic regulations must be “designed to ensure” awards are credible) and B4.4(e) (credible means awards “reflect students’ knowledge and skills”). Discounting credits with lowest marks “excludes part of a student’s assessed achievement” and so:

    …may result in a student receiving a class of degree that overlooks material evidence of their performance against the full learning outcomes for the course.

    2. Multiple calculation routes

    These take that principle further. Several universities calculate your degree multiple ways and award whichever result is better. One runs two complete calculations – using only your best 100 credits at Level 6, or taking your best 100 at both levels with 20:80 weighting. You get whichever is higher.

    Another offers three complete routes – unweighted mean, weighted mean and a profile-based method. Students receive the highest classification any method produces.

    For those holding onto their “standards”, this sort of thing is mathematically guaranteed to inflate outcomes. You’re measuring the best possible interpretation of what students achieved, not what they achieved every time. As a result, comparison across institutions becomes meaningless. Again, this is now pretty much being banned.
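    As a sketch – assuming hypothetical Level 5 and Level 6 averages and the standard grade boundaries, not any provider’s published rules – a “best of” scheme looks like this:

    ```python
    # Standard boundaries: 70 First, 60 2:1, 50 2:2, 40 Third.
    def classify(avg):
        if avg >= 70: return "First"
        if avg >= 60: return "2:1"
        if avg >= 50: return "2:2"
        return "Third"

    level5, level6 = 58.0, 71.0
    route_a = level6                        # route A: Level 6 marks only
    route_b = 0.2 * level5 + 0.8 * level6   # route B: 20:80 weighting across levels
    print(classify(route_a))                # First
    print(classify(route_b))                # 2:1
    print(classify(max(route_a, route_b)))  # First -- the "best of" result always wins
    ```

    Because the scheme takes the maximum over routes, the published classification can never be lower than any single method would give – which is exactly the inflation guarantee described above.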

    This time, the case against is that:

    …the classification awarded should not simply be the most favourable result, but the result that most accurately reflects the student’s level of achievement against the learning outcomes.

    3. Borderline uplift rules

    What happens on the cusps? Borderline uplift rules create all sorts of discretion around the theoretical boundaries.

    One university automatically uplifts students to the higher class if two-thirds of their final-stage credits fall within that band, even if their overall average sits below the threshold. Another operates a 0.5 percentage point automatic uplift zone. Several maintain 2.0 percentage point consideration zones where students can be promoted if profile criteria are met.

    If 10 per cent of students cluster around borderlines and half are uplifted, that’s a five per cent boost to top grades at each boundary – the cumulative effect is substantial.

    One small and specialist provider plays the counterfactual – when it gained degree-awarding powers, it explicitly removed all discretionary borderline uplift. The boundaries are fixed – and it argues this is more honest than trying to maintain discretion that inevitably becomes inconsistent.

    OfS could argue borderline uplift breaches B4.2(b)’s requirement that assessments be “reliable” – defined as requiring “consistency as between students.”

    When two students with 69.4 per cent overall averages receive different classifications (one uplifted to a First, one remaining a 2:1) based on mark distribution patterns or examination board discretion, the system produces inconsistent outcomes for identical demonstrated performance.
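    A minimal sketch of such a rule – with a hypothetical 2.0-point zone and two-thirds profile threshold, not any provider’s actual regulations – shows how identical averages can diverge:

    ```python
    def classify_with_uplift(avg, share_in_higher_band, zone=2.0, required=2/3):
        # Uplift a borderline 2:1 to a First when enough final-stage credits
        # fall in the First band -- a discretionary profile rule.
        boundary = 70.0
        if avg >= boundary:
            return "First"
        if boundary - avg <= zone and share_in_higher_band >= required:
            return "First"  # uplifted on profile criteria
        return "2:1"

    # Two students with an identical 69.4 average but different mark profiles:
    print(classify_with_uplift(69.4, 0.70))  # First (uplifted)
    print(classify_with_uplift(69.4, 0.40))  # 2:1
    ```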

    But OfS avoids this argument, likely because it would directly challenge decades of established discretion on borderlines – a core feature of the existing system. Eliminating all discretion would conflict with professional academic judgment practices that the sector considers fundamental, and OfS has chosen not to pick that fight.

    4. Exit acceleration

    Heavy final-year weighting amplifies improvement while minimising early difficulties. Where deployed, the near-universal pattern is now 25 to 30 per cent for Level 5 and 70 to 75 per cent for Level 6, though some institutions weight less heavily, with year three counting for 60 per cent of the final mark.

    A student who averages 55 in year two and 72 in year three gets 67.8 overall with a 25:75 weighting – a 2:1. A student who averages 72 in year two and 55 in year three gets 59.3 – just short of a 2:1.

    The magnitude of change is identical – it’s just that the direction differs. The system structurally rewards late bloomers and penalises any early starters who plateau.
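    The asymmetry is easy to see in a short sketch, using a 25:75 year-two:year-three split (one of the weightings mentioned above):

    ```python
    def overall(year2, year3, w_final=0.75):
        # Weighted mean with the bulk of the classification riding on year three.
        return (1 - w_final) * year2 + w_final * year3

    print(overall(55, 72))  # 67.75 -> comfortably a 2:1
    print(overall(72, 55))  # 59.25 -> just short of a 2:1
    ```

    Swapping the two year averages swaps nothing about total attainment, yet moves the result more than eight points – the weighting, not the marks, does the work.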

    OfS could argue that 75 per cent final-year weighting breaches B4.2(a)’s requirement for “appropriately comprehensive” assessment. B4 Guidance 335M warns that assessment “focusing only on material taught at the end of a long course… is unlikely to provide a valid assessment of that course,” and heavy (though not exclusive) final-year emphasis arguably extends this principle – if the course’s subject matter is taught across three years, does minimising assessment of two-thirds of that teaching constitute comprehensive evaluation?

    But OfS doesn’t make this argument either, likely because year weighting is explicit in published regulations, often driven by PSRB requirements, and represents settled institutional choices rather than recent innovations. Challenging it would mean questioning established pedagogical frameworks rather than targeting post-hoc changes that might mask grade inflation.

    5. First-year exclusion

    Finally, with a handful of institutional and PSRB exceptions, excluding first-year marks is now pretty much universal, removing what used to be the bottom tail of performance distributions.

    While this is now so standard it seems natural, it represents a significant structural change from 20 to 30 years ago. You can score 40s across the board in first year and still graduate with a first if you score 70-plus in years two and three.

    Combine it with other features, and the interaction effects compound. At universities using best 105 credits at each of Levels 5 and 6 with 30:70 weighting, only 210 of 360 total credits – 58 per cent – actually contribute to your classification. And so on.
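    The credit-counting above, under the stated assumptions (best 105 of 120 credits at each of Levels 5 and 6, Level 4 excluded), is simple to verify:

    ```python
    total_credits = 3 * 120   # three years of 120 credits each
    counted = 105 + 105       # best credits at Levels 5 and 6; first year excluded
    share = 100 * counted / total_credits
    print(counted, total_credits)  # 210 360
    print(round(share, 1))         # 58.3 -- per cent of credits that contribute
    ```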

    OfS could argue first-year exclusion breaches comprehensiveness requirements – when combined with best-credit selection, only 210 of 360 total credits (58 per cent) might count toward classification. But the practice is now so universal, with only a handful of institutional and PSRB exceptions, that OfS treats it as neutral accepted practice rather than a compliance concern.

    Targeting something this deeply embedded across the sector would face overwhelming institutional autonomy defenses and would effectively require the sector to reinstate a practice it collectively abandoned over the past two decades.

    OfS’ strategy is to focus regulatory pressure on recent adoptions of “inherently inflationary” practices rather than challenging longstanding sector-wide norms.

    Institution type

    Russell Group universities generally operate on the totality-of-work philosophy. Research-intensives typically employ single calculation methods, count all credits and maintain narrow borderline zones.

    But there are exceptions. One I’ve seen has automatic borderline uplift that’s more generous than many post-92s. Another’s 2.0 percentage point borderline zone adds substantial flexibility. If anything, the pattern isn’t uniformity of rigour – it’s uniformity of philosophy.

    One London university has a marks-counting scheme rather than a weighted average – what some would say is the most “rigorous” system in England. And two others – you can guess who – don’t fit this analysis at all, with subject-specific systems and no university-wide algorithms.

    Post-1992s systematically deploy multiple flexibility features. Best-credit selection appears at roughly 70 per cent of post-92s. Multiple calculation routes appear at around 40 per cent of post-92s versus virtually zero per cent at research-intensive institutions. Several post-92s have introduced new, more flexible classification algorithms in the past five years, while Russell Group frameworks have been substantially stable for a decade or more.

    This difference reflects real pressures. Post-92s face acute scrutiny on student outcomes from league tables, OfS monitoring and recruitment competition, and disproportionately serve students from disadvantaged backgrounds with lower prior attainment.

    From one perspective, flexibility is a cynical response to metrics pressure. From another, it’s recognition that their students face different challenges. Both perspectives contain truth.

    Meanwhile, Scottish universities present a different model entirely, using GPA-based calculations across SCQF Levels 9 and 10 within four-year degree structures.

    The Scottish system is more internally standardised than the English system, but the two are fundamentally incompatible. As OfS attempts to mandate English standardisation, Scottish universities will surely refuse, citing devolved education powers.

    London, meanwhile, is the city of maximum algorithmic diversity within minimum geographic distance. Major London universities use radically different calculation systems despite competing for similar students. A student with identical marks might receive a 2:1 at one, a first at another and a first with a higher average at a third, purely because of algorithmic differences.
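To make that concrete, here is a hedged sketch – the marks, weightings and best-credit rule are all invented for illustration, and none of the routes reproduces any named university’s actual scheme – showing the same transcript landing on different sides of the First boundary:

```python
# A hypothetical comparison of three classification algorithms of the kinds
# described in this piece. Marks and weightings are invented for illustration.
def weighted(y2, y3, w2, w3):
    """Weighted average across levels, assuming equal-credit modules."""
    return w2 * sum(y2) / len(y2) + w3 * sum(y3) / len(y3)

def classify(avg):
    if avg >= 70: return "First"
    if avg >= 60: return "2:1"
    if avg >= 50: return "2:2"
    return "Third"

level5 = [68, 66, 59, 71]   # the same Level 5 marks at every "university"
level6 = [74, 72, 64, 69]   # the same Level 6 marks

best3 = lambda marks: sorted(marks, reverse=True)[:3]

routes = {
    "A: all marks, 50:50":    weighted(level5, level6, 0.50, 0.50),
    "B: all marks, 25:75":    weighted(level5, level6, 0.25, 0.75),
    "C: best modules, 30:70": weighted(best3(level5), best3(level6), 0.30, 0.70),
}
for name, avg in routes.items():
    print(f"{name} -> {avg:.1f} ({classify(avg)})")
```

Same marks, three outcomes: routes A and B both produce a 2:1 (on averages of roughly 67.9 and 68.8), while route C’s best-credit selection tips the average past 70 and into a First.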

    What the algorithm can’t tell you

    The “five features” capture most of the systematic variation between institutional algorithms. But they’re not the whole story.

    First, they measure the mechanics of aggregation, not the standards of marking. A 65 per cent essay at one university may represent genuinely different work from a 65 per cent at another. External examining is meant to moderate this, but the system depends heavily on trust and professional judgment. Algorithmic variation compounds whatever underlying marking variation exists – but marking standards themselves remain largely opaque.

    Second, several important rules fall outside the five-feature framework but still create significant variation. Compensation and condonement rules – how universities handle failed modules – differ substantially. Some allow up to 30 credits of condoned failure while still classifying for honours. Others exclude students from honours classification with any substantial failure, regardless of their other marks.

    Compulsory module rules also cut across the best-credit philosophy. Many universities mandate that dissertations or major projects must count toward classification even if they’re not among a student’s best marks. Others allow them to be dropped. A student who performs poorly on their dissertation but excellently elsewhere will face radically different outcomes depending on these rules.

    In a world where huge numbers of students now have radically less module choice than they did just a few years ago as a result of cuts, they would have reason to feel doubly aggrieved if modules they never wanted to take in the first place now count toward classification when they didn’t last week.

    Several universities use explicit credit-volume requirements at each classification threshold. A student might need not just a 60 per cent average for a 2:1, but also at least 180 credits at 60 per cent or above, including specific volumes from the final year. This builds dual criteria into the system – you need both the average and the profile. It’s philosophically distinct from borderline uplift, which operates after the primary calculation.
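A minimal sketch of such a dual-criteria rule – the 60 per cent average and the 180-credits-at-60-plus volume threshold come from the example above, while the function name and module data are hypothetical:

```python
# Hypothetical dual-criteria ("profile") check: a candidate needs BOTH the
# average and a minimum volume of credit at the threshold. Module data and
# the function name are invented for illustration.
def meets_upper_second(modules):
    """modules: list of (credits, mark) pairs counted toward classification."""
    total = sum(c for c, _ in modules)
    average = sum(c * m for c, m in modules) / total
    credits_at_60 = sum(c for c, m in modules if m >= 60)
    return average >= 60 and credits_at_60 >= 180

# Two profiles with the identical 62 per cent weighted average:
steady = [(30, 62)] * 8                     # 240 credits, all marks at 62
spiky  = [(30, 75)] * 4 + [(30, 49)] * 4    # 240 credits, half below 60

print(meets_upper_second(steady), meets_upper_second(spiky))
```

The steady profile satisfies both criteria; the spiky one has the same average but only 120 credits at 60-plus, so it fails the volume test – exactly the kind of divergence a pure-average algorithm would never produce.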

    And finally, treatment of reassessed work varies. Nearly all universities cap resit marks at the pass threshold, but some exclude capped marks from “best credit” calculations while others include them. For students who fail and recover, this determines whether they can still achieve high classifications or are effectively capped at lower bands regardless of their other performance.

    The point isn’t so much that I (or OfS) have missed the “real” drivers of variation – the five features genuinely are the major structural mechanisms. But the system’s complexity runs deeper than any five-point list can capture. When we layer compensation rules onto best-credit selection, compulsory modules onto multiple calculation routes, and volume requirements onto borderline uplift, the number of possible institutional configurations runs into the thousands.

    The transparency problem

    Every day’s a school day at Wonkhe, but what has been striking for me is quite how difficult the information has been to access and compare. Some institutions publish comprehensive regulations as dense PDF documents. Others use modular web-based regulations across multiple pages. Some bury details in programme specifications. Several have no easily locatable public explanation at all.

    UUK’s position on this, I’d suggest, is something of a stretch:

    University policies are now much more transparent to students. Universities are explaining how they calculate the classification of awards, what the different degree classifications mean and how external examiners ensure consistency between institutions.

    Publication cycles vary unpredictably, cohort applicability is often ambiguous, and cross-referencing between regulations, programme specifications and external requirements adds layers upon layers of complexity. The result is that meaningful comparison is effectively impossible for anyone outside the quality assurance sector.

    This opacity matters because it masks that non-comparability problem. When an employer sees “2:1, BA in History” on a CV, they have no way of knowing whether this candidate’s university used all marks or selected the best 100 credits, whether multiple calculation routes were available or how heavily final-year work was weighted. The classification looks identical regardless. That makes it more, not less, likely that they’ll just go on prejudices and league tables – regardless of the TEF medal.

    We can estimate the impact conservatively. Year one exclusion removes perhaps 10 to 15 per cent of the performance distribution. Best-credit selection removes another five to 10 per cent. Heavy final-year weighting amplifies improvement trajectories. Multiple calculation routes guarantee some students shift up a boundary. Borderline rules uplift perhaps three to five per cent of the cohort at each threshold.

    Stack these together and you could shift perhaps 15 to 25 per cent of students up one classification band compared to a system that counted everything equally with single-method calculation and no borderline flexibility. Degree classifications end up reflecting institutional algorithm choices as much as student learning or teaching quality.

    Yes, but

    When universities defend these features, the justifications are individually compelling. Best-credit selection rewards students’ strongest work rather than penalising every difficult moment. Multiple routes remove arbitrary disadvantage. Borderline uplift reflects that the difference between 69.4 and 69.6 per cent is statistically meaningless. Final-year emphasis recognises that learning develops over time. First-year exclusion creates space for genuine learning without constant pressure.

    None of these arguments is obviously wrong. Each reflects defensible beliefs about what education is for. The problem is that they’re not universal beliefs, and the current system allows multiple philosophies to coexist under a facade of equivalence.

    Post-92s add an equity dimension – their flexibility helps students from disadvantaged backgrounds who face greater obstacles. If standardisation forces them to adopt strict algorithms, degree outcomes will decline at institutions serving the most disadvantaged students. But did students really learn less, or attain to a “lower” standard?

    The counterargument is that if the algorithm itself makes classifications structurally easier to achieve, you haven’t promoted equity – you’ve devalued the qualification. And without the sort of smart, skills- and competencies-based transcripts that most of our pass/fail cousins across Europe adopt, UK students end up choosing between a rock and a hard place – if only they were conscious of that choice.

    The other thing that strikes me is that the arguments I made in December 2020 for “baking in” grade inflation haven’t gone away just because the pandemic has. If anything, the case for flexibility has strengthened as the cost of living crisis, inadequate maintenance support and deteriorating student mental health create circumstances that affect performance through no fault of students’ own.

    Students are working longer hours in paid employment to afford rent and food, living in unsuitable accommodation, caring for family members, and managing mental health conditions at record levels. The universities that retained pandemic-era flexibilities – best-credit selection, generous borderline rules, multiple calculation routes – aren’t being cynical about grade inflation. They’re recognising that their students disproportionately face these obstacles, and that a “totality-of-work” philosophy systematically penalises students for circumstances beyond their control rather than assessing what they’re actually capable of achieving.

    The philosophical question remains – should a degree classification reflect every difficult moment across three years, or should it represent genuine capability demonstrated when circumstances allow? Universities serving disadvantaged students have answered that question one way – research-intensive universities serving advantaged students have answered it another.

    OfS’s intervention threatens to impose the latter philosophy sector-wide, eliminating the flexibility that helps students from disadvantaged backgrounds show their “best selves” rather than punishing them for structural inequalities that affect their week-to-week performance.

    Now what

    As such, a regulator seeking to intervene faces an interesting challenge with no obviously good options – albeit one of its own making. Another approach might have been to cap the most egregious practices – prohibit triple-route calculations, limit best-credit selection to 90 per cent of total credits, cap borderline zones at 1.5 percentage points.

    That would eliminate the worst outliers while preserving meaningful autonomy. The sector would likely comply minimally while claiming victory, but oodles of variation would remain.

    A stricter approach would be mandating identical algorithms – but that would provoke rebellion. Devolved nations would refuse, citing devolved powers and triggering a constitutional confrontation. Research-intensive universities would mount legal challenges on academic freedom grounds, if they’re not preparing to do so already. Post-92s would deploy equity arguments, claiming standardisation harms universities serving disadvantaged students.

    A politically savvy but inadequate approach might have been mandatory transparency rather than prescription. Requiring universities to publish algorithms in standardised format with some underpinning philosophy would help. That might preserve autonomy while creating a bit of accountability. Maybe competitive pressure and reputational risk will drive voluntary convergence.

    But universities will resist even being forced to quantify and publicise the effects of their grading systems. They’ll argue it undermines confidence and damages the UK’s international reputation.

    Given the diversity of courses, providers, students and PSRBs, algorithms also feel like a weird thing to standardise. I can make a much better case for a defined set of subject awards, a shared governance framework (including subject benchmark statements, related PSRBs and degree algorithms) than I can for tightening standardisation in isolation.

    The fundamental problem is that the UK degree classification system was designed for a different age, a different sector and a different set of students. It was probably a fiction to imagine that sorting everyone into First, 2:1, 2:2 and Third was possible even 40 years ago – but today, it’s such obvious nonsense that without richer transcripts, it just becomes another way to drag down the reputation of the sector and its students.

    Unfit for purpose

    In 2007, the Burgess Review – commissioned by Universities UK itself – recommended replacing honours degree classifications with detailed achievement transcripts.

    Burgess identified the exact problems we have today – considerable variation in institutional algorithms, the unreliability of classification as an indicator of achievement, and the fundamental inadequacy of trying to capture three years of diverse learning in a single grade.

    The sector chose not to implement Burgess’s recommendations, concerned that moving away from classifications would disadvantage UK graduates in labour markets “where the classification system is well understood.”

    Eighteen years later, the classification system is neither well understood nor meaningful. A 2:1 at one institution isn’t comparable to a 2:1 at another, but the system’s facade of equivalence persists.

    The sector chose legibility and inertia over accuracy and ended up with neither – sticking with a system that protected institutional diversity while robbing students of the ability to show off theirs. As we see over and over again, a failure to fix the roof when the sun was shining means reform may now arrive externally imposed.

    Now that the regulator is knocking on the conformity door, there’s an easy response. OfS can’t take an annual pop at grade inflation if most of the sector abandons the outdated and inadequate degree classification system. Nothing in the rules seems to mandate it, some UG quals don’t use it (think regulated professional bachelors), and who knows where the White Paper’s demand for meaningful exit awards at Levels 4 and 5 fits into all of this.

    Maybe we shouldn’t be surprised that a regulator that oversees a meaningless and opaque medal system with a complex algorithm that somehow boils an entire university down to “Bronze”, “Silver”, “Gold” or “Requires Improvement” is keen to keep hold of the equivalent for students.

    But killing off the dated relic would send a really powerful signal – that the sector is committed to developing the whole student, explaining their skills and attributes and what’s good about them – rather than pretending that the classification makes the holder of a 2:1 “better” than those with a Third, and “worse” than those with a First.
