Blog

  • Higher education postcard: vice chancellors and all that


    Higher education is not like other sectors. One way in which this is apparent is the different terminology used for the leadership roles in universities and colleges. Let’s have a look.

    Firstly, who’s in charge? If you look at UK universities’ governance documents, the highest office in order of precedence is often the chancellor. The word chancellor, I find, comes from Old French, and originally denoted the court usher who controlled access to the monarch or to judges – the term deriving from the Latin cancellus, or lattice work, being the bars in the screen dividing the king or judge from hoi polloi.

    But chancellors in universities are nowadays honorary and ceremonial roles, although at first they probably did more of the day-to-day work. For example, Oxford’s first chancellor may have been appointed in 1201, its first vice chancellor in 1230. There are twenty-nine years of stuff to be done between those dates, and the chancellor must have been in the frame for doing some of that.

    In any event, chancellors are now ceremonial. But they have pro-chancellors and vice chancellors. Both terms mean, literally, on behalf of the chancellor, but from different Latin roots. Pro-chancellors tend to chair the university’s governing body, but play no part in the executive leadership of the university. The vice chancellor is – almost universally – the executive leader of the university.

    (Incidentally, vice in the meaning of “on behalf of” was one of the first university Latin words I learnt. In my first job, at Senate House, University of London, minutes would record, for example, “Dr Bloggs vice Dr Smith” – that is, Dr Smith had sent a substitute. I never learnt Latin at school, on account of attending the ur-bog-standard comprehensive in Sheffield, so I’ve picked it up from working in universities, where it remains the first language for some.)

    You’ll also meet pro in pro-vice chancellors; sadly it’s not in this sense the opposite of am, like in pro-am golf tournaments. Add in deputy and you can have deputy pro-vice chancellor, which is a little like assistant chief to the chief assistant.

    Moving back up the ladder, there’s a trend for some universities now to have a vice chancellor who is also president. This follows the US usage, where the head of institution is sometimes the chancellor, and sometimes the president. The issue, apparently, was that on some trips overseas, US hosts would wonder why they were only talking to an assistant, not to the CEO. So being vice chancellor and president is becoming more common, particularly but not exclusively, I think, in Russell Group universities.

    And in colleges you have many titles. As I noted, I started my career in higher education at the University of London, and there you have a provost, a rector, a master, a warden, several principals, several directors, and several presidents, sometimes just president and sometimes president and something else. All of this stems from the individual history of the colleges: for example, Goldsmiths’ warden title comes from the Worshipful Company of Goldsmiths, which supported its foundation back in the day.

    And I bet you think that I’ve forgotten Scotland, which, of course, has a different history, with four universities established before union with England in 1707. And in Scotland principal is the dominant title for the head of institution. Twelve have principals (and probably principles as well), five combine the titles of principal and vice chancellor, and one has a director. Scotland also has several rectors, but these aren’t the head of institution like the rector of Imperial (or indeed of Sunderland Polytechnic as was) but are ceremonial figureheads elected by students.

    Here’s a jigsaw of the postcard. It hasn’t been sent, so I can’t put a specific date on it.

    And is it an actual person? Eyeballing the potential candidates, I think it might be a portrait of Arthur James Mason, Professor of Divinity, Master of Pembroke College 1903–12, and Vice Chancellor of the University of Cambridge from 1908 to 1910.

    Source link

  • What gets lost when the first draft is always polished?


    Something has shifted.

    I recognised immediately that the email in my inbox had been written with the help of AI. The message was structured, measured, and neatly aligned with the conventions of academic communication.

    There was nothing inappropriate about it. If anything, it stood out for its clarity. My immediate response was not irritation but reflection.

    Something fundamental in the landscape of student communication had shifted, and the implications were not merely technological.

    What struck me most was not the technology itself, but what it revealed about students’ relationship to academic communication.

    For many, AI has become the safest place to begin shaping their academic voice – a space where they can get help without fear of sounding incompetent, impolite, or out of place.

    However, when AI becomes the first point of contact, it also significantly reshapes how students build confidence, seek support, and come to understand themselves as legitimate participants in academic life.

    Email writing has long played a low-key, unacknowledged role in higher education. It shapes the quality of interactions between students and staff, influences how confident students feel when seeking support, and often serves as a first step in their academic identity formation.

    Yet it has rarely been at the forefront of academic and professional skills requiring structured teaching. Instead, tone, clarity, and cultural norms tend to be acquired informally, often with uneven outcomes.

    The shift in student email practices, however, did not occur overnight. It emerged in distinct waves. The earliest emails may have missed certain academic conventions, but they reflected something important – the unfiltered, often anxious, and deeply human nature of student expression.

    “Hello Dear Tutor, please can you help me with XYZ!!!”

    What followed was a period in which translation apps and early AI-adjacent tools began shaping student messages. These drafts were typically literal, tonally inconsistent, or overly formal – sometimes more cumbersome than the messages students hoped to refine.

    By 2024, a third wave appeared – emails that were grammatically precise but noticeably mechanical, with phrasing that felt oddly detached or mismatched to the emotional context.

    The most recent wave, however, is markedly different. Today’s AI-assisted emails display tone, register, and interpersonal warmth that align strikingly well with UK higher education communication norms.

    They are clearer, easier to respond to, and more polished than many human-written counterparts – yet their increasing fluency raises questions about what may slowly be lost when AI mediates the first layer of human communication.

    Over the course of a three-year structured personal tutor pilot for postgraduate taught students, I delivered lecture-style workshops on email etiquette and intercultural communication across three cohorts – sessions designed to help students navigate tone, clarity, structure, and cultural awareness in their correspondence.

    These workshops were well received and often revealed the specific areas where students felt uncertain. But over time, as AI-generated drafts became more polished, student feedback pointed to a shift – they were less concerned with the mechanics of email writing and more interested in developing the judgement required for sensitive or complex interactions.

    Students increasingly arrived with structurally sound drafts but lacked confidence in how to handle the interpersonal dimensions of communication. This change has influenced how I teach these sessions and reflects a wider sector trend – as AI takes on the mechanical aspects of communication, pedagogical attention is moving toward the interpretive and relational elements that technology cannot replicate.

    Unequal support

    Universities have never been wholly indifferent to the communication needs of students. The presence of English for Academic Purposes tutors, academic skills tutors, study skills teams, and personal tutoring structures demonstrates a longstanding recognition of the complexities students face, particularly in an international setting.

    These roles bring linguistic, cultural, and interpersonal expertise into academic environments in ways that strengthen student experience and foster meaningful connection.

    However, what has changed recently is where students turn first for support. A student unsure about how to phrase a sensitive request, or anxious about sounding impolite, once came to a tutor for reassurance. Increasingly, their first point of reference is an AI tool.

    Students describe this shift not as a shortcut but as a source of confidence – a way to express themselves without fearing misinterpretation, judgement, or linguistic missteps.

    For international students navigating unfamiliar communicative conventions, and for home students who have not been explicitly taught professional email writing, the appeal is obvious.

    AI offers a sense of linguistic security – an initial framework that enables communication to begin – echoing findings from Jisc and Advance HE surveys which show that students often feel uncertain about tone, clarity, and how their messages will be interpreted.

    As sector guidance increasingly frames AI literacy as part of students’ digital capabilities, there is a risk that communication is treated as a technical skill rather than a relational one – something that can be optimised rather than learned through reflection and support.

    What AI can’t do

    In many cases, AI-generated messages are easier for staff to interpret. They arrive with clearer structure, fewer ambiguities, and a tone that aligns more closely with academic expectations. This has practical benefits – misunderstandings reduce, responses are quicker, and students feel more able to reach out.

    Yet beneath these advantages lie deeper pedagogical questions. Before the widespread use of AI, a substantial part of my work as a personal tutor involved helping students formulate messages that required sensitivity, boundary-setting, or careful negotiation. These conversations were not specifically about vocabulary or grammar – they were about judgement – understanding what to disclose, how much to say, how to position oneself, and how to communicate with integrity in complex interpersonal contexts.

    Consider the example of a student struggling with group dynamics. An AI tool might produce a concise, polite email requesting guidance, but this does not equip the student to navigate the underlying challenge – describing the issue accurately, advocating for fairness, protecting relationships with peers, and anticipating the consequences of escalation.

    These are sophisticated communicative decisions that depend on confidence, self-awareness, and the ability to read social context. AI, at its current stage, cannot develop those capacities.

    This distinction is critical. Although AI can generate language, it cannot cultivate the reflective judgement students require to express themselves authentically and ethically, which is arguably imperative to the development of their critical thinking skills.

    Dependency or gap?

    Some worry that students may become overly reliant on AI or that their individuality may be flattened by formulaic phrasing. These concerns are not without merit, but they require nuance.

    Many students who lean heavily on AI do so because they were never formally taught the communicative norms of academia. Their “dependency” reflects systemic gaps in the provision of communication education more than any inherent shortcoming.

    Others fear that AI obscures intercultural difference. While AI can smooth the surface of communication, it does not – and cannot – replace the development of intercultural understanding. Students still need to learn how communication functions within and across cultural contexts, even if AI helps them begin those conversations with greater confidence.

    With appropriate guidance, students can learn not to accept AI-generated outputs uncritically but to review, adapt, and personalise them. The tool becomes a starting point in a process of learning and refinement rather than an end product in itself.

    A starting point

    The most constructive approach is to understand AI as one element within a broader ecology of communication support. Students may begin with an AI-generated draft, but human judgement must remain indispensable. Here, the educator’s role shifts toward helping students decide when AI-generated phrasing is appropriate, when it is insufficient, and when a situation requires a more nuanced or personalised approach.

    This does not diminish the importance of academic communication support. Rather, it redefines it. Workshops and tutorials that once focused on sentence-level clarity may now need to prioritise higher-order skills – articulating uncertainty, navigating conflict, expressing boundaries, and understanding the ethics of communication in digital environments.

    Small reflective exercises can deepen this learning, prompting students to consider why they phrased a request in a particular way, what outcomes they were hoping for, and how AI either supported or limited their intentions.

    Teaching judgement

    If we accept that AI will remain embedded in student communication practices, then support systems must evolve accordingly. This does not mean discouraging AI use. Rather, it means situating AI within a broader pedagogical framework that helps students engage with it critically and responsibly.

    Educators might place greater emphasis on discerning when AI-generated phrasing is contextually appropriate, understanding how tone functions in sensitive or emotionally charged interactions, recognising when AI output should be revised, supplemented, or entirely set aside, and developing a professional voice that reflects authenticity, clarity, and ethical judgement.

    These shifts align communication support more closely with the realities of academic and professional life, where clarity can be assisted by technology but meaning must still be crafted by the human mind.

    AI has undoubtedly made some aspects of student communication easier. It has reduced uncertainty, accelerated clarity, and enabled students to express themselves in ways that feel more confident and secure. Nevertheless, the deeper work of communication – the development of judgement, voice, and relational understanding – remains firmly within the human domain.

    The task for higher education is not to resist AI but to understand how it reshapes the conditions under which communicative skills are learned. Our responsibility, therefore, is to ensure that students can not only send clear messages but also navigate the complexities of academic and interpersonal life with confidence and integrity – assets that will serve them well, both personally and professionally.

    The key issue, then, is not whether students use AI to write emails, but whether higher education continues to take responsibility for teaching the judgement that surrounds communication. As AI absorbs the mechanics of clarity and tone, educators must focus more intentionally on helping students navigate uncertainty, vulnerability, boundaries, and ethical self-expression.

    The development of judgement, voice, and interpersonal understanding remains firmly within the human domain – and at the heart of higher education’s purpose.

    AI may draft the email, but it cannot teach students who they are in the process of writing it.

    Source link

  • Universities should be positive disruptors on trans inclusion


    On 16 April 2025, the Supreme Court handed down its judgment in For Women Scotland Ltd v The Scottish Ministers, ruling that under the Equality Act 2010 a woman is defined by biological sex.

    This ruling has had a profound impact on organisations, with both Girlguiding and the Women’s Institute enacting policies that are possibly trans-exclusionary.

    While these organisations may not wish to be trans-exclusionary, they have felt pressured by law to enact some kind of policy that aligns with the current ruling.

    And while the Supreme Court ruling provides a definition of sex in law, it offers little guidance on how this should be operationalised within multi-layered institutional settings such as universities.

    Confusion reigns

    Several higher education institutions have already attempted to pass policy that aimed to be as inclusive as possible of transgender and gender non-conforming students.

    However, one example – the University of Leicester – has faced legal action because of its trans-inclusive policies.

    As an academic working in a university as both an associate lecturer and co-director of equality, diversity, and inclusion, I have observed universities’ initial responses to these policies.

    They largely consist of confusion in the first instance, followed by lengthy engagement in various working groups with two aims – first, to follow the new, somewhat unclear policy to the letter, and second, to try to maintain or create a trans-inclusive environment.

    This is difficult because the policy simultaneously requires single-sex spaces while stating that transgender discrimination is wrong. The emergence of multiple working groups, delayed policy decisions, and divergent institutional responses speaks to the uncertainty universities are now facing – placing higher education institutions in the difficult position of balancing a newly clarified legal definition with long-standing commitments to inclusion, equality, and student welfare.

    The confusion around this ruling is likely having an impact on professional relationships between transgender and cisgender members of university communities across the UK.

    Why relationships matter

    Forming professional relationships is key for members of universities – acting as a pathway to hearing diverse voices in the university community and implementing appropriate policy.

    As a psychologist specialising in relationships between dominant and marginalised groups, I am well-versed in the evidence base for why creating an inclusive environment with supportive relationships is important.

    Rather than assessing the legal merits of the decision, my focus is on how its interpretation is shaping institutional behaviour and action.

    One key part of enacting inclusive behaviour is through strengthening supportive relationships between members of the university community. Supportive relationships are key for enhancing health by reducing stress and improving coping mechanisms – to name one important mechanism highlighted by relationships researchers such as Julianne Holt-Lunstad.

    All people experience everyday stress one way or another, whether it’s the car breaking down, the bus being late, or power outages. However, transgender people – and indeed other marginalised groups – experience additional stressors through a process called minority stress.

    In its simplest terms, this can be defined as the extraneous additional stressors experienced by the marginalised, including transphobia, specific forms of discrimination, and intentional misgendering.

    Through forging reassuring and kind relationships with transgender people, and indeed marginalised groups in general, we can create an inclusive university environment that bolsters the health of its community.

    Positive disruptors

    This is where we as academics come in – as educators, researchers, and experts. We have the power as institutions to go against the grain and say “this is not right.”

    We can dive into the evidence base – we can see that transgender and gender non-conforming people have an estimated 45 per cent rate of suicidal ideation, that the majority of crime is not committed by transgender and gender non-conforming people, and that transgender people are estimated to experience crime at twice the rate of cisgender people – plausibly four times, given that hate crime is under-reported.

    As universities we can be – to borrow a term from Julie Hulme – positive disruptors. Encouraging and enabling transgender joy is essential to creating a positive experience at university which helps improve feelings of gender congruence. We can fight this misinformed policy through our actions, we can create spaces that are safe for transgender students and staff, we can be the bastion of inclusivity and uphold what universities truly are – safe spaces for complex debates and places of learning.

    Some principles

    In the absence of any official guidance, I have included some principles here.

    The first step in being a positive disruptor is to include transgender and gender non-conforming people – from both staff and student populations – in university policymaking.

    Another recommendation is to ensure that transgender and gender non-conforming people are empowered in these spaces, with champions or allies who can bolster their stance and ensure their voices are heard.

    Lastly, universities should strive to make their spaces democratic and transparent to their members from all walks of life.

    These are but a few recommendations, but as foundational principles and practices they can help disrupt the status quo in a positive direction for our university communities.

    Source link

  • Will the student protection stable door ever get closed?


    The Office for Students (OfS) has published new polling on students’ perceptions of their providers’ response to financial challenges.

    I say new – it was actually carried out last April – but regardless of the delay, given the scale of job losses in the sector over the past year, we are very much in no shit Sherlock territory when it comes to the headlines.

    83 per cent of those polled thought that cost-cutting measures had changed the experience they felt they’d been promised – often through larger class sizes than expected, greater use of online learning, or reduced access to academic resources and student support.

    Around a quarter reported changes in support services, including services/funding offered via the SU, IT and technical support, or academic support services.

    Around two in five perceived impacts on access to academic resources and the quality of teaching, as well as a reduction in extracurricular activities. Meanwhile, 46 per cent of those polled expressed concern about the potential closure of their course or department – and nearly half were unaware of their options if that happened.

    An accompanying blog post remixes material from the new strategy, linking Ambitious, Vigilant, Collaborative and Vocal to the findings.

    Students might well ask whether “a little bit quicker than glacial” should be added to the list.

    They noticed

    Savanta conducted an online survey of 1,256 students studying at OfS-regulated universities and colleges in England, with fieldwork running 8-15 April 2025. The sample was designed with quotas on age, sex and ethnicity to match HESA data and the OfS Access and Participation dashboard, with all reported subgroup differences statistically significant at the 95 per cent confidence level and a margin of error of +/- 3 per cent.

    Just over half (52 per cent) of respondents reported noticing cost-cutting measures at their institution, with 56 per cent aware of perceived financial risks. Awareness was significantly stratified by student characteristics – older students (25+) were more likely than younger students to notice measures (58 per cent vs 48 per cent), as were postgraduates compared to undergraduates (60 per cent vs 44 per cent).

    Male students were also significantly more likely to be aware of financial risks than female students (67 per cent vs 47 per cent). Those already aware of financial risks were significantly more likely to notice measures than those not aware (62 per cent compared to 38 per cent).

    Students reported becoming aware of changes through multiple channels. Some received formal communications from their providers:

    The university management board sent out emails explaining various cost-cutting measures. (Undergraduate)

    Others learned through staff:

    My teachers told me about lay-offs. (Undergraduate)

    Many noticed changes through direct observation – reduced opening hours, cuts to society funding, larger classes:

    I noticed reduced hours of opening on the music studios and the library. (Undergraduate)

    I noticed the cuts in extracurricular because of my societies I was involved in, they got less funding from the university, so many activities that took place over the past few years are not happening anymore. (Postgraduate)

    For some, the changes were impossible to miss:

    It was obvious given that the previous master’s class had five students, and this one has over 50. Social media platforms also discussed it. (Postgraduate)

    Others pieced together the picture from multiple sources:

    I noticed some of these changes firsthand, such as the longer waiting times for counselling services and the reduced number of extracurricular activities. However, I was also informed about some of the cost-cutting measures through the university’s student union newsletter and social media channels, which reported on changes to IT support services and careers advising. (Postgraduate)

    Impacts and differences

    Among students who noticed cost-cutting measures, the most commonly reported impacts related to staffing – 44 per cent observed changes to staff availability and capacity, while 40 per cent reported increased class sizes, with postgraduates significantly more likely to report the latter (44 per cent vs 35 per cent of undergraduates).

    There are fewer staff and bigger classroom sizes, and there are lots of cutbacks on IT equipment. (Postgraduate)

    Beyond the academic core, 38 per cent noticed changes to financial support availability, and 93 per cent of those noticing any measures highlighted changes to support services.

    Postgraduates were significantly more likely to notice changes to careers and employability services (24 per cent vs 15 per cent) and IT/technical support (29 per cent vs 20 per cent). Those aware of financial risks were more attuned to “indirect” measures such as changes to placements (28 per cent vs 15 per cent) and academic and pastoral support (24 per cent vs 14 per cent).

    The perceived impact was predominantly negative – 39 per cent cited a negative impact on academic resource quality, 35 per cent on extracurricular activities, and 34 per cent on teaching quality. The report notes that some of these findings were “difficult to interpret and/or implausible”, suggesting respondents may not always have been clear what the question was asking.

    A striking 83 per cent of students reported noticing a gap between the experience they believed had been promised at enrolment and the reality – with postgraduates feeling this more acutely (90 per cent vs 77 per cent of undergraduates).

    The report notes an important distinction between institutional promises and student expectations – though the question referred to institutional commitments, responses mixed unmet personal expectations with unfulfilled promises.

    Class sizes featured prominently:

    I was promised a class of 15 but now there are 25 students per class. (Undergraduate)

    Classes are much larger than expected and some courses and/or resources they promised were either cut or moved online due to budget cuts. (Undergraduate)

    The shift to online delivery was a recurring theme:

    When I enrolled, I was promised access to regular in-person lectures and state-of-the-art facilities. However, due to the budget cuts, many of my lectures were moved online. (Postgraduate)

    Many lectures were moved online permanently after the pandemic even though we were back on campus. (Postgraduate)

    Support and resources also fell short of expectations:

    The support I get as a student has reduced compared to what I was told I would get when I enrolled. (Undergraduate)

    They promised to provide us with all the learning equipment in school, but now I have to bring my own laptops. (Undergraduate)

    My university experience has differed quite a bit from what was initially promised, mainly because of some minor cost-cutting measures. When I enrolled, they promised modern facilities and extensive student support. There is limited access to academic support, and modern facilities like labs and student spaces have seen budget cuts. (Postgraduate)

    Financial pressures compounded the sense of broken promises:

    Tuition fees increased slightly, but alongside living expenses and unexpected costs, it’s a lot more than I anticipated when enrolling. (Undergraduate)

    Prices have gone up for food and accommodation, and unexpected fees for items like printing. (Undergraduate)

    The consequences for student decisions are notable – a quarter (25 per cent) said they were more likely to consider dropping out, 36 per cent were considering transferring, and 24 per cent considering deferring.

    Postgraduates were more likely to consider transferring than undergraduates (43 per cent vs 30 per cent), as were minority ethnic students compared to white students (41 per cent vs 33 per cent), and male students compared to female students (41 per cent vs 32 per cent).

    Awareness of financial risks significantly amplified these inclinations – 29 per cent of those aware were considering dropping out compared to 19 per cent of those unaware, and 43 per cent were considering transferring compared to 29 per cent.

    Preparedness and support

    Students were more likely to be unaware of what would happen should their course close than aware (48 per cent vs 44 per cent), with this lack of awareness particularly pronounced among undergraduates (53 per cent) compared to postgraduates (43 per cent), and among female students (54 per cent vs 40 per cent of male students).

    A majority (56 per cent) were unaware of their provider’s published student protection plans – rising to 68 per cent among undergraduates and 72 per cent among those not aware of financial risks. Males reported higher awareness of student protection plans than females (45 per cent vs 30 per cent).

    Concern about potential course or department closure sat at 46 per cent overall, with higher levels among postgraduates (59 per cent vs 34 per cent of undergraduates), those aged 25+ (64 per cent vs 36 per cent of 18-24 year olds), male students (51 per cent vs 42 per cent), and minority ethnic students (53 per cent vs 42 per cent).

    Subject differences were notable – engineering and technology students were most likely to be concerned (64 per cent vs 46 per cent average), while those studying subjects allied to medicine were among the least concerned (66 per cent unconcerned vs 52 per cent average).

    When asked what support they would expect in a closure scenario, 61 per cent anticipated help transferring to another institution and 60 per cent expected support to complete their course. Clear guidance about available options was anticipated by 51 per cent.

    Notably, those unaware of financial risks had higher expectations of support than those who were aware – 71 per cent of those unaware expected transfer support compared to 53 per cent of those aware, and 66 per cent expected course completion support compared to 55 per cent. Females consistently had higher expectations than males across all support measures.

    On priorities for what providers should protect, 56 per cent said quality of teaching, followed by financial support options (47 per cent), student support services (47 per cent), and academic and pastoral support (39 per cent). Female and white students were more likely to prioritise teaching quality (59 per cent vs 52 per cent for male and minority ethnic students respectively).

    Minority ethnic students were distinctive in being equally likely to prioritise financial support options as teaching quality (51 per cent and 52 per cent) – the only demographic subgroup where this was the case. Those unaware of financial risks were consistently more likely to say each aspect should be prioritised than those who were aware.

    Financial hares and promises tortoises

    One way to read this – especially when it’s read in conjunction with the polling OfS published last year on awareness of rights – is that there is a material risk that, to achieve savings, providers have been engaged in widespread breach of contract, effectively depending on students not understanding, or not feeling able to exercise, their rights in order to pull off the changes. The respective risks have been weighed.

    The student on the Clapham omnibus might both pose that as a hypothesis, and then ask what OfS has done about that serious risk to the student interest. I expect they’d find the relative focus on phone calls to vice chancellors about their financial sustainability, punctuated by occasional opinion-polled snapshots of the sort here, a disturbing comparison for a “student, not provider” interests regulator.

    It’s never unhelpful to have this sort of stuff written down – and if OfS feels it needs to prove (as with its reasonable adjustments and student rights research) that what student leaders have been telling it through panels and its “student debrief” events is true via polling, so be it.

    But notwithstanding the “jam tomorrow” in the blog on what will now be done with the results, there are important questions surrounding priorities, pace and the idea of the student interest during a period of significant cuts.

    I could go back further than this, but a 2019 board paper from Susan Lapworth (then OfS’ Director of Regulation, now its outgoing CEO) identified almost identical concerns – and acknowledged that the regulator’s existing tools were insufficient to address them.

    The paper noted that “students are not clear about what they are buying, in terms of quality, contact time, support, and so on” and that “students’ consumer protection rights are not enforced when what they have been promised, in terms of quality, contact time, support, and so on, is not delivered.”

    It explicitly framed the outcomes the OfS should be seeking in terms of student expectations being met – that teaching quality, contact time, academic support, learning resources and financial support should all be “appropriate and as they had expected.” Six years later, 83 per cent of students report a gap between what they believed they had been promised and the reality.

    The 2019 paper also raised concerns about the burden of enforcement falling on individual students, questioning “whether a model that relies primarily on individual students challenging a provider for a breach of contract places a burden on students in an undesirable way.” It acknowledged that “the contractual relationship between students and providers is unequal” and that “it is not easy for students to identify instances where they have not received the service they were promised and to seek redress.”

    The paper proposed developing new work to address the failings – work that appears never to have materialised. And the 2025 survey finds a majority of students unaware of student protection plans, and significant proportions now considering dropping out, transferring or deferring as a direct consequence of the gap between expectation and experience.

    Promises, promises

    Even if we put our fingers in our ears over the pandemic and fast forward to 2024 and 2025, John Blake’s work on “the student interest” was surfacing identical themes. In a speech to the SUs Membership Services Conference in August 2024, he acknowledged that OfS still has “challenges delivering that centrality of student interest” and that “students have not always felt the confidence in our regulation that they should.” He was explicit about what students were telling him:

    They tell us they are not getting the teaching hours they were promised, they tell us their work is not marked and support offered in a timely fashion, they tell us they don’t have the time resources and opportunities to get involved in the extracurricular activities so prominently featured in the prospectus and crucially, students tell us frequently that if they are unhappy about any of these things too often, their university or college does not respond speedily and effectively.

    Research then published by OfS in February 2025 reinforced the findings – 28 per cent of undergraduates felt contact hours had been insufficient, 32 per cent had issues with how their course was taught, and 40 per cent said financial support was one of the three biggest influences on their success.

    On teaching and learning, accommodation, mental health support, and the cost of living, students reported consistent shortfalls. Yet the research also highlighted that students feel powerless to seek redress when promises are broken:

    Students are not really given consumers rights, as seen by Covid year students who want money back. If you are given a false promise… there should be a way to complain… but [there] is not really. (Female, 18, further education student, YouGov focus group)

    It is much more difficult to complain, and essentially impossible to claim a refund. (Female, 20, higher education student, YouGov focus group)

    I have a right to get what I was expecting when I signed up for the degree… This means having teaching provision in line with what was advertised. (Female, 20, higher education student, YouGov focus group)

    The 2025 Savanta survey – along with the work it put out in December on student awareness of rights – puts numbers to this picture: 83 per cent noticing a gap between promise and reality, a majority unaware of student protection plans, and significant proportions considering dropping out, transferring or deferring. The regulator has known what the problems are for some time.

    Surprise, surprise

    OfS’ own financial sustainability reporting makes clear that the sector’s response to financial pressure is directly affecting students. Its May 2025 report documented a third consecutive year of declining surpluses and liquidity, with modelling suggesting that without mitigating action, up to 200 providers could be in deficit by 2027-28.

    Roundtable discussions with finance directors revealed how providers were responding – providers acknowledged they had “a limited ability to continue to cut costs and maintain value for money for students”, with some having to consider “course closures” and “rationalisation”.

    The report also noted that “students appear to be less prepared for higher education than previously”, resulting in “increasing attrition rates and the need for greater ongoing pastoral support” – support that “often has significant cost implications, requiring specialist trained staff resource.”

    Last March, students told OfS exactly what the survey would later find. OfS said it was “at this very early stage of the project” to understand the impact of financial challenges on students, and wanted student “insight and advice” to “shape up the things that we might look at.”

    Libraries had been “cut down” with “reduced hours” – with one student noting they were “unable to use the building after 5pm including the library” meaning “students on placement are heavily affected.” There had been a “reduction in support services and support staff”, with “staffing being pared back to the bare minimum” and classes being cancelled outright.

    The impact on remaining staff was palpable – “less passionate than they used to be, some of the cuts impacting their morale” – resulting in “delay in responding to student queries” and “customer service not as good for students.” Students reported a “reduction in tests for neurodivergent students”, changes to “extension and deferral policies” meaning students were “no longer able to get extensions and deferrals”, and disruptive “changes in supervisors.”

    One student simply noted their “course is now totally different to when I came.” Perhaps most tellingly, students said they were “unable to raise issues” because “staff saying that it’s out of their hands.”

    OfS promised that “your input today will be used to shape our approach to protecting students in the face of University and college cutbacks” and that:

    this isn’t just an exercise that we will do now and think is really interesting… we will be looking at these and seeing what we can do and we will feedback what we’ve done at a future debrief event.

    You might have thought that almost a year on, another student event on consumer rights would have been a good time to feed back on what was done. Not so much.

    Students on the call were told that OfS would be working on some “real steps” towards meaningfully strengthening student protections – a consultation on changes to the regulatory framework at Easter.

    With time to feed in and the usual months of delay between consultation close and publication, plus some time to make changes, we’re probably looking at 2028.

    Worse – even on issues OfS has previously been clear about – attendees got vague answers. Asked whether students were entitled to refunds over strike action, the answer was:

    I’m going to not answer that right now because you may know that there is a test case in the courts, by UCL students, about this very issue. Um, so I’m going to wait for the outcome of that.

    Can we imagine Arif Ahmed giving a similar response in the context of Sussex’s judicial review of its fines last year? We cannot.

    I have written a lot of articles on these issues since OfS’ inception. Every so often OfS announces progress, an intention to act, a new way to frame the issues or whatever. Now and again, I get optimistic. But real action has been thin – there are only so many broken promises before it’s reasonable to conclude that nothing will ever change.

    The time that students really do need protection is not when the sun is shining – it’s when the cuts are on. It is very difficult to avoid the conclusion that OfS is waiting for the financial sustainability horse to fully bolt before it even gets close to closing the student protection stable door.

  • Florida Now Accepting Public Comment on H-1B Visa Hiring Ban

    Florida took another step Thursday toward banning all its public universities from hiring foreign workers on H-1B visas.

    The committee of the state university system’s Board of Governors will now take public comments for two weeks on a proposed prohibition on hiring any new employees on H-1Bs through Jan. 5 of next year. The committee’s vote to advance the proposal was a voice vote, with no nays heard from any member. The proposal will come back to the full board for a vote after the public comment period ends.

    If enacted, Florida would become the second state to ban the use of H-1B visas at public universities. Texas governor Greg Abbott announced a one-year freeze earlier this week—a move that prompted pushback from faculty.

    The state bans come after President Trump placed a $100,000 fee on new H-1B visa applications in September (international workers who are already legal residents aren’t required to pay the fee). The next month, Florida governor Ron DeSantis ordered the state’s universities to “pull the plug on the use of these H-1B visas.” Fourteen of the Board of Governors’ 17 members are appointed by the governor and confirmed by the state Senate.

    DeSantis complained about professors coming from China, “supposed Palestine” and elsewhere. He added that “we need to make sure our citizens here in Florida are first in line for job opportunities.”

    Universities use the program to hire faculty, doctors and researchers and argue it’s required to meet needs in health care, engineering and other specialized occupations. Some conservatives contend that the program is being abused.

    Discussion about the proposed ban lasted about 15 minutes Thursday, during which no committee member strongly advocated for the policy. Much of the time consisted of the board’s only faculty voting member and its only student voting member—neither of whom is a member of the committee—reading off their objections to the move. Among their concerns: university system leaders’ plans to collect information on the H-1B program during the hiring moratorium, instead of collecting the data before making a decision.

    Kimberly Dunn, chair of the statewide Advisory Council of Faculty Senates and the faculty board representative, said institutions and the university system “rely on the H-1B process to recruit world-class talent to our institutions.”

    “Whether it is a pediatric cancer surgeon or a globally recognized researcher, these individuals directly contribute to Floridians’ health, safety and economic success,” Dunn said. “In many cases, the H-1B visa is the only viable pathway for bringing this level of expertise to our state.”

    “Limiting our ability to recruit the very best talent in the world risks undermining the excellence that has positioned our system as a national leader,” Dunn added. She said the reputational damage from the ban could outlast the yearlong moratorium.

    She urged the system to collect the data before pausing the hiring of new H-1B visa workers.

    Carson Dale, Florida State University’s student body president and chair of the Florida Student Association, said he believes that “American taxpayer dollars should support hiring Americans whenever possible.”

    “Where I part ways is with the mechanism chosen here,” Dale said. “This is not a neutral reform; it is a categorical restriction that determines who we are allowed to consider regardless of who is most qualified.”

    He said the prohibition undermines Florida universities’ commitment to “merit” and goes against other actions that Florida has taken, including scaling back diversity, equity and inclusion initiatives because “we believed they risked interfering with merit-based selection.”

    “This regulation has the practical effect of excluding otherwise highly qualified candidates before individual merit can be assessed,” Dale said. “That matters because the labor market for advanced research talent is global.”

    He said Trump’s $100,000 fee was already implemented “to deter overuse and protect U.S. workers.” He noted Elon Musk, along with other entrepreneurs, came to the U.S. from overseas.

    “Top-tier candidates are not going to pause their careers to wait on a single state,” Dale said. “When Florida removes itself from consideration for an entire hiring cycle, those candidates accept offers elsewhere.”

    Last fiscal year, according to a U.S. Citizenship and Immigration Services database, the federal government approved 253 H-1B visa holders to work at the University of Florida, about 110 each at Florida State University and the University of South Florida, 47 at the University of Central Florida, and smaller numbers at other public institutions.

    Ray Rodrigues, the university system’s chancellor, told the committee that if the H-1B hiring pause is approved, his office and the universities “will be studying the cost of the H-1B program as well as how the program is used by our universities, including identifying the areas where the program is currently being used and whether those areas are of strategic need.”

    He also said the study will look at whether employers have used the program to bring in employees who are “paid less than market wage.” He added that the system plans to work with universities to identify other areas that should be included in the study.

    Alan Levine, chair of the Nomination and Governance Committee, which considered the proposal, appeared to acknowledge the issues that a blunt yearlong ban on H-1B hires could cause.

    “I would encourage the universities—if an issue arises that’s unforeseen, particularly in areas like medical schools, faculty, engineering, where we have contracts with the Defense Department, things like that—where there’s issues that become an issue of concern for you, please bring those to the chancellor so that we can make a decision about how to address it,” Levine said. “We can always bring the group back together again if we need to.”

    “Certainly there are physician shortages, and there’s needs particularly in high-acuity specialties in health care and medicine, and certainly there’s issues in certain STEM areas like engineering, so it’s understood,” Levine said. “The goal here is to collect information.”

  • The American people fact-checked their government

    This essay was originally published by Persuasion on Jan. 28, 2026.


    On Oct. 17, 1961, tens of thousands of Algerians marched through the streets of Paris in peaceful defiance of a discriminatory curfew imposed by the French state. Police opened fire, beat protesters, arrested them en masse — and, in some cases, threw people into the Seine, where they drowned. Historians later called it “the bloodiest act of state repression of street protest in Western Europe in modern history.” At least 48 — but possibly hundreds — were killed.

    Yet for decades, the official story minimized the violence. The death toll, it was claimed, was three. Police had acted to defend themselves. The protesters were terrorists.

    The French state actively buried the truth. Records were falsified. Evidence suppressed. Investigations blocked. Publications seized. The paper trail was shaped to match the story.

    In 1999, the French Public Prosecutor’s Office concluded that a massacre had taken place, but only in 2012 did President Hollande acknowledge it on behalf of the French Republic. This is the danger of a public sphere without a distributed capacity to challenge official accounts in real time: It is difficult to imagine that the events of Oct. 17 could have been hidden for so long if thousands of protesters and bystanders had carried smartphones, livestreamed the crackdown, and uploaded footage as the bodies hit the water.

    Paris 1961 is a historical warning. Minneapolis 2026 is its modern counterpoint.

    Within hours of the killing of Alex Pretti by federal immigration agents on Jan. 24, top officials attempted to shape the narrative. They placed the blame squarely on the victim, with Secretary of Homeland Security Kristi Noem claiming that Pretti “approached” ICE officers with a gun and was killed after he “violently resisted” attempts to disarm him. White House Senior Advisor Stephen Miller called Pretti “an assassin” who “tried to murder federal agents.” FBI Director Kash Patel said, “You don’t have a right to break the law and incite violence.”

    In other words, Pretti supposedly posed a threat and paid the price.

    But something happened that couldn’t have happened in France in 1961. As bystander footage spread across social media, the official narrative began to collapse. Videos appeared to show a cellphone in one of Pretti’s hands and no gun in the other. Officers also appeared to remove his holstered gun — legally carried — before he was shot several times. It then emerged that Pretti was an ICU nurse with no criminal record — hardly the prototype of a terrorist.

    The official account was clearly at odds with the best available evidence. Four days after the shooting, the Trump administration is already scrambling to save face, cast blame, and “de-escalate” the ICE presence in Minnesota.

    The current obsession with misinformation tends to focus on the public: online mobs, foreign influencers, flaming trolls. But history suggests a more inconvenient truth: in times of crisis, disinformation often comes from above. Governments, including democratic ones, have powerful incentives to shape information. When a state agent shoots a citizen, the response is rarely “Let’s expose ourselves to transparency.” It is often the opposite: to control the narrative, limit scrutiny, discourage dissent, and frame the event in morally legitimizing terms.

    What should our response look like? The Pretti case offered an answer — not only through the videos, but through something else that happened almost simultaneously: the public correction of powerful figures, at scale. Within hours the statements by Miller, Noem, and Patel — and even the official @DHSgov account — had all received Community Notes on X, a platform that, ironically, has become increasingly central to the populist right and is owned by Trump ally Elon Musk.

    This is where social media performs a civic function.

    When platforms label content as “false” in a top-down fashion, many users interpret it as bias — “truth policing” by corporate gatekeepers in cahoots with governments. But the Community Notes system is different. It is crowdsourced, asking volunteers to add context and sources to misleading posts. An open-source algorithm decides which notes become visible, and, crucially, prioritizes notes that gain support from users with different political perspectives. The point is not unanimity — it’s cross-ideological agreement sufficient to clear a threshold of credibility.

    This is what makes bottom-up correction hard to dismiss as partisan censorship. It involves a distributed group of users reaching a form of consensus, often by pointing to credible reporting. It can create a positive feedback loop: journalism supplies verifiable facts; the crowd amplifies and contextualizes them; the overall information environment becomes more resilient.

    Early research into the impact of crowdsourcing is promising. Studies have found high accuracy rates for Community Notes in specific domains like COVID-19 content, and a significant share of notes cite high-quality sources.

    More broadly, crowdsourced fact checking reflects an important principle: when trust in elite institutions collapses, a purely expert-driven model may fail or even backfire. Politically diverse crowds can sometimes do what “authoritative” gatekeepers cannot: persuade skeptics that a correction is legitimate.

    Crowdsourcing is not a silver bullet. The search for a single, decisive fix for disinformation is a “modern mirage” that often serves as a pretext for giving authorities new powers they will inevitably abuse. But the promise of crowdsourcing suggests we should bet on pluralism: multiple, overlapping checks that strengthen the public’s ability to verify claims without empowering any single institution — especially the state — to control the boundaries of permissible speech. The mainstreaming of crowdsourced fact-checking across social media platforms should function as a disincentive to brazen lying by politicians and political influencers.

    In Paris in 1961, the state could suppress evidence, control archives, intimidate media, and deflect until public attention faded. In Minneapolis in 2026, video evidence traveled faster than the official storyline — and distributed networks of verification made it harder for powerful figures to rewrite reality without pushback.

    This is what a free society should aim for: not a perfect public sphere without falsehoods (which has never existed), but a public sphere with enough openness, transparency, and decentralized checking power to ensure that lies — especially from the top — cannot become the permanent record.

  • Podcast: Demand, Disabled students, medicine

    This week on the podcast we examine what a rise in UK university applicants really tells us about the future demand for higher education.

    With UCAS reporting a 4.8 per cent increase in applications at the January deadline, driven largely by a demographic peak in 18-year-olds, we explore whether this represents a genuine resurgence in demand or a temporary population effect.

    Plus we discuss new evidence on disabled students’ experiences in higher education, including concerns that pandemic-era accessibility is being rolled back, and the implications of the Medical Training (Prioritisation) Bill — from pressure on NHS training places to uncertainty for students studying medicine abroad through UK-linked programmes.

    With Alex Stanley, Vice President for Higher Education at the National Union of Students, Dani Payne, Head of Education and Social Mobility at the Social Market Foundation, and David Kernohan, Deputy Editor at Wonkhe, presented by Mark Leach, Editor-in-Chief, Wonkhe.

    You can subscribe to the podcast on Apple Podcasts, YouTube Music, Spotify, Acast, Amazon Music, Deezer, RadioPublic, Podchaser, Castbox, Player FM, Stitcher, TuneIn, Luminary or via your favourite app with the RSS feed.

    On the site

    Universities need to get a grip on reasonable adjustments

    How will the Medical Training (Prioritisation) Bill affect universities and students?

    What does the 2026 January deadline data show?


  • California prohibits its teachers from talking about a student’s gender identity to their parents. That raises First Amendment concerns.

    Can a state bar public school teachers from talking to a student’s parents about their minor child’s gender identity without the child’s consent? That’s the question presented by Mirabelli v. Bonta, an ongoing constitutional challenge to a California state policy that prohibits public schools from communicating with parents about their child’s in-school choices regarding their gender identity. 

    In Mirabelli, parents and teachers challenged California’s policy on a variety of constitutional grounds, alleging it violated the teachers’ First Amendment rights to free speech, the teachers’ and parents’ First Amendment right to the free exercise of religion, and the parents’ substantive due process rights under the Fourteenth Amendment. 

    In late December, a federal district court sided with the plaintiffs, issuing a permanent injunction against what it deemed the state’s “policy of secrecy when it comes to a student’s gender identification.” After California appealed, the United States Court of Appeals for the Ninth Circuit stayed the district court’s permanent injunction earlier this month. In turn, the plaintiffs filed for emergency relief from the Supreme Court of the United States, asking it to reinstate the injunction. 

    The district court and Ninth Circuit both focused primarily on the merits of the substantive due process and free exercise claims. Other courts considering similar questions have, too (and have reached varied conclusions). That focus is unsurprising, given the competing interests at play. The religious liberty group Becket argues in its amicus curiae brief to the Supreme Court, for example, that California’s policy interferes with parents’ right to direct the religious upbringing of their children, and therefore the right to free exercise should control the case’s outcome. But as the Court considers the plaintiffs’ request, it’s worth considering a few points about the free expression interests implicated by policies like California’s. 

    A government policy that bars public school teachers from communicating with parents about their children, particularly young children, warrants skepticism. Absent extraordinary circumstances — such as allegations of parental abuse — public schools shouldn’t keep important information about students from their parents. While class is in session, schools operate in loco parentis — in place of the parent. But that authority ends when the bell rings, and it cannot justify state-enforced silence. 

    Muzzling teachers presents problems, too. It’s true that public grade-school teachers are government employees. And when they speak in that capacity, their First Amendment rights generally take a back seat to the government’s interest in educating students. But as the district court correctly recognized, the government’s authority to control what public school teachers may say shouldn’t extend so far as to prevent them from delivering truthful information to parents about the students in their care. 

    Silencing teachers grants the government a dangerous degree of control over what parents know about what’s happening after students walk through the schoolhouse gate. Mandating information black-outs puts teachers in a difficult position; as the district court put it, “when a parent asks directly, the teachers are compelled to avoid answering.” It also sets a troubling precedent. Today, the ban is on gender information; tomorrow, it might bar talking about what K-12 students are (or aren’t) learning in class. And telling students that the government is a trusted keeper of secrets — especially secrets the government promises to keep from parents — teaches them a worrying lesson about the role of the state in our democratic society. 

  • 4 policy trends that should be on college leaders’ radars in 2026

    While 2025 may be in the rearview mirror, the policy upheaval that defined the year is not. Higher education experts warn that more disruption lies ahead as the Trump administration continues efforts to reshape the sector, wielding tools ranging from civil rights investigations to regulatory changes. 

    College leaders should brace for more federal government pressure, including through novel avenues, such as accreditation. And they should also expect continued attacks on diversity, equity and inclusion efforts. 

    Below, we’re rounding up four big policy shifts we’ll be watching — and some expert predictions on how they’ll unfold — for the year ahead. 

    Accreditation steps into a limelight it’s not used to

    On the campaign trail, President Donald Trump called college accreditors a “secret weapon” in a war against a higher education system he painted as rife with “Marxist maniacs.” It was an unfamiliar level of scrutiny for the field.

    “It’s not just unusual for Trump, but unusual for any presidential campaign to have a whole speech dedicated to accreditation,” said Jon Fansmith, senior vice president for government relations at the American Council on Education. “It’s generally kind of a quiet corner of policy.”

    As president, Trump signed an executive order in April to reopen reviews of new accreditors at the U.S. Department of Education while blasting existing accreditors’ DEI standards. The order mandated that accreditors require institutions to use program data on student outcomes “without reference to race, ethnicity, or sex.”

    At the same time, the order called for requiring “intellectual diversity” in faculty — a term left undefined in the order but often used as code on the right for hiring more conservatives. 

    The Education Department followed up with guidance aimed at easing the path for colleges seeking to switch accreditors and plans to reshape accreditation regulations this spring. 

    Beyond policymaking, the Trump administration has occasionally sought to pressure institutions through their accreditors. 

    In July, two federal agencies notified Harvard University’s accreditor that the Ivy League institution may no longer meet its accreditation standards. 

    That was based on the administration’s claims that Harvard was “deliberately indifferent” to the harassment of Jewish and Israeli students on its campus, claims that a federal judge has found failed to justify the funding freezes the government deployed to pressure policy changes at Harvard. 

    The administration used a similar tactic with Columbia University’s accreditor, prior to inking a deal with the university to settle its Title VI investigations. 

    Some accreditors have made changes favored by the Trump administration. The WASC Senior College and University Commission, New England Commission of Higher Education and American Psychological Association have permanently or temporarily dropped DEI standards for institutions.

    The stakes for institutional and academic independence are high. “They’ve been trying to force institutions to adopt policies and make choices that align with their viewpoints, and that’s a big problem,” Fansmith said. He added that the country has never used accreditors “as a tool for implementing the political views of the party in power.”

    More practical questions hang in the air about the Trump administration’s plans, including its push to recognize new accreditors. 

    “Would they have the same standards applied to them as they would for other accreditors?” asked Nasser Paydar, president of the Council for Higher Education Accreditation, which lobbies for the sector. “That’s a big unknown.” 

    He noted that accreditors would welcome additional competition given the size of the higher ed field. “There’s room for it.”

    Paydar also pointed to the administration’s emphasis on student and graduate outcomes. 

    “The department is indicating they want to make sure accreditors focus on student outcomes. It’s wonderful,” he said. But he also pointed to different student outcomes among different types of colleges and programs. The country needs teachers and social workers, for example, but they tend to earn less.

    Specifics about how the administration plans to incorporate outcome standards into accreditation remain unknown. “We want to find out how they’re planning to do this, because whatever that is is going to influence how universities behave going forward,” Paydar said. 

    Source link

  • AI in edtech: The 2026 efficacy imperative

    AI in edtech: The 2026 efficacy imperative


    AI has crossed a threshold. In 2026, it is no longer a pilot category or a differentiator you add on. It is part of the operating fabric of education, embedded in how learning experiences are created, how learners practice, how educators respond, and how outcomes are measured. That reality changes the product design standard.

    The strategic question is not, “Do we have AI embedded in the learning product design or delivery?” It is, “Can we prove AI is improving outcomes reliably, safely, and at scale?”

    That proof now matters to everyone. Education leaders face accountability pressure. Institutions balance outcomes and budgets. Publishers must defend program impact. CTE providers are tasked with career enablement that is real, not implied. This is the shift from hype to efficacy. Efficacy is not a slogan. It is a product discipline.

    What the 2026 efficacy imperative actually means

    Efficacy is the chain that connects intent to impact: mastery, progression, completion, and readiness. In CTE and career pathways, readiness includes demonstrated performance in authentic tasks such as troubleshooting, communication, procedural accuracy, decision-making, and safe execution, not just quiz scores.

    The product design takeaway is simple. Treat efficacy as a first-class product requirement. That means clear success criteria, instrumentation, governance, and a continuous improvement loop. If you cannot answer what improved, for whom, and under what conditions, your AI strategy is not a strategy. It is a list of features.

    Below is practical guidance you can apply immediately.

    1. Start with outcomes, then design the AI

    A common mistake is shipping capabilities in search of purpose. Chat interfaces, content generation, personalization, and automated feedback can all be useful. Utility is not efficacy.

    Guidance
    Anchor your AI roadmap in a measurable outcome statement, then work backward.

    • Define the outcome you want to improve (mastery, progression, completion, readiness).
    • Define the measurable indicators that represent that outcome (signals and thresholds).
    • Design the AI intervention that can credibly move those indicators.
    • Instrument the experience so you can attribute lift to the intervention.
    • Iterate based on evidence, not excitement.
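    The backward-design steps above can be sketched as a small data structure. This is an illustrative sketch, not a reference implementation: the class names, indicator names, and baseline numbers are all hypothetical.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        """A measurable signal with a baseline and a target threshold."""
        name: str          # e.g. "module_completion_rate" (hypothetical name)
        baseline: float    # value before the intervention
        target: float      # threshold that counts as a win

    @dataclass
    class OutcomeStatement:
        """An outcome, the indicators that represent it, and the intervention."""
        outcome: str                      # mastery, progression, completion, readiness
        intervention: str                 # the AI feature expected to move the indicators
        indicators: list[Indicator] = field(default_factory=list)

        def lift(self, measured: dict[str, float]) -> dict[str, float]:
            """Observed change vs. baseline for each instrumented indicator."""
            return {i.name: round(measured[i.name] - i.baseline, 4)
                    for i in self.indicators if i.name in measured}

    # Work backward from completion, not forward from a chat feature.
    plan = OutcomeStatement(
        outcome="completion",
        intervention="targeted practice after repeated errors",
        indicators=[Indicator("module_completion_rate", baseline=0.62, target=0.70)],
    )
    print(plan.lift({"module_completion_rate": 0.68}))  # lift of 0.06, still short of target
    ```

    The point of the structure is that a roadmap item without an `indicators` list cannot report `lift`, which is exactly the "features shipped versus outcomes moved" distinction.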

    Takeaways for leaders
     If your roadmap is organized as “features shipped,” you will struggle to prove impact. A mature roadmap reads as “outcomes moved” with clarity on measurement, scope, and tradeoffs.

    2. Make CTE and career enablement measurable and defensible

    Career enablement is the clearest test of value in education. Learners want capability, educators want rigor with scalability, and employers want confidence that credentials represent real performance.

    CTE makes this pressure visible. It is also where AI can either elevate programs or undermine trust if it inflates claims without evidence.

    Guidance
    Focus AI on the moments that shape readiness.

    • Competency-based progression must be operational, not aspirational. Competencies should be explicit, observable, and assessable. Outcomes are not “covered.” They are verified.
    • Applied practice must be the center. Scenarios, simulations, troubleshooting, role plays, and procedural accuracy are where readiness is built.
    • Assessment credibility must be protected. Blueprint alignment, difficulty control, and human oversight are non-negotiable in high-stakes workflows.

    Takeaways for leaders
    A defensible career enablement claim is simple. Learners show measurable improvement on authentic tasks aligned to explicit competencies with consistent evaluation. If your program cannot demonstrate that, it is vulnerable, regardless of how polished the AI appears.

    3. Treat platform decisions as product strategy decisions

    Many AI initiatives fail because the underlying platform cannot support consistency, governance, or measurement.

    If AI is treated as a set of features, you can ship quickly and move on. If AI is a commitment to efficacy, your platform must standardize how AI is used, govern variability, and measure outcomes consistently.

    Guidance
    Build a platform posture around three capabilities.

    • Standardize the AI patterns that matter. Define reusable primitives such as coaching, hinting, targeted practice, rubric-based feedback, retrieval, summarization, and escalation to humans. Without standardization, quality varies, and outcomes cannot be compared.
    • Govern variability without slowing delivery. Put model and prompt versioning, policy constraints, content boundaries, confidence thresholds, and required human decision points in the platform layer.
    • Measure once and learn everywhere. Instrumentation should be consistent across experiences so you can compare cohorts, programs, and interventions without rebuilding analytics each time.
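    One way to make these three capabilities concrete is a governed registry of AI primitives, where prompt versioning and escalation thresholds live in the platform layer rather than in each feature. A minimal sketch, with hypothetical pattern names, model names, and thresholds:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AIPattern:
        """A reusable, governed AI primitive (coaching, hinting, rubric feedback, ...)."""
        name: str
        prompt_version: str     # versioned so outcomes stay comparable across cohorts
        model: str              # hypothetical model identifier
        confidence_floor: float # below this, route the decision to a human

    # The platform owns one registry; features reference patterns by name.
    REGISTRY: dict[str, AIPattern] = {
        "hinting": AIPattern("hinting", "v3", "tutor-small", confidence_floor=0.70),
        "rubric_feedback": AIPattern("rubric_feedback", "v1", "grader-large", confidence_floor=0.85),
    }

    def run_pattern(name: str, confidence: float) -> str:
        """Apply governance centrally: every call is versioned and threshold-checked."""
        pattern = REGISTRY[name]
        if confidence < pattern.confidence_floor:
            return f"{pattern.name}@{pattern.prompt_version}: escalate_to_human"
        return f"{pattern.name}@{pattern.prompt_version}: deliver"

    print(run_pattern("rubric_feedback", confidence=0.80))  # below floor, so escalates
    ```

    Because every result is tagged with the pattern and prompt version, downstream analytics can compare cohorts per version instead of rebuilding instrumentation for each feature.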

    Takeaways for leaders
    Platform is no longer plumbing. In 2026, the platform is the mechanism that makes efficacy scalable and repeatable. If your platform cannot standardize, govern, and measure, your AI strategy will remain fragmented and hard to defend.

    4. Build tech-assisted measurement into the daily operating loop

    Efficacy cannot be a quarterly research exercise. It must be continuous, lightweight, and embedded without turning educators into data clerks.

    Guidance
    Use a measurement architecture that supports decision-making.

    • Define a small learning event vocabulary you can trust. Examples include attempt, error type, hint usage, misconception flag, scenario completion, rubric criterion met, accommodation applied, and escalation triggered. Keep it small and consistent.
    • Use rubric-aligned evaluation for applied work. Rubrics are the bridge between learning intent and measurable performance. AI can assist by pre-scoring against criteria, highlighting evidence, flagging uncertainty, and routing edge cases to human review.
    • Link micro signals to macro outcomes. Tie practice behavior to mastery, progression, completion, assessment performance, and readiness indicators so you can prioritize investments and retire weak interventions.
    • Enable safe experimentation. Use controlled rollouts, cohort selection, thresholds, and guardrails so teams can test responsibly and learn quickly without breaking trust.
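    The first and third points above, a small event vocabulary and a micro-to-macro rollup, can be sketched in a few lines. The event type names and indicator names here are illustrative, not a proposed standard:

    ```python
    from collections import Counter

    # A deliberately small, closed vocabulary of learning events (names illustrative).
    EVENT_TYPES = {"attempt", "error", "hint_used", "misconception_flag",
                   "scenario_completed", "rubric_criterion_met", "escalated"}

    def validate(event: dict) -> dict:
        """Reject events outside the agreed vocabulary so signals stay comparable."""
        if event["type"] not in EVENT_TYPES:
            raise ValueError(f"unknown event type: {event['type']}")
        return event

    def micro_to_macro(events: list[dict]) -> dict:
        """Roll micro signals up into macro indicators for one learner."""
        counts = Counter(e["type"] for e in map(validate, events))
        attempts = counts["attempt"] or 1  # avoid division by zero
        return {
            "error_rate": counts["error"] / attempts,
            "hint_rate": counts["hint_used"] / attempts,
            "scenarios_completed": counts["scenario_completed"],
        }

    log = [{"type": "attempt"}, {"type": "hint_used"}, {"type": "attempt"},
           {"type": "error"}, {"type": "scenario_completed"}]
    print(micro_to_macro(log))
    ```

    Keeping the vocabulary closed is what makes the rollup trustworthy: an event that is not in the set fails loudly at ingestion instead of silently skewing a dashboard later.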

    Takeaways for leaders
    If you cannot attribute improvement to a specific intervention and measure it continuously, you will drift into reporting usage rather than proving impact. Usage is not efficacy.

    5. Treat accessibility as part of efficacy, not compliance overhead

    An AI system that works for only some learners is not effective. Accessibility is now a condition of efficacy and a driver of scale.

    Guidance
    Bake accessibility into AI-supported experiences.

    • Ensure structure and semantics, keyboard support, captions, audio description, and high-quality alt text.
    • Validate compatibility with assistive technologies.
    • Measure efficacy across learner groups rather than averaging into a single headline.
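    The last point, reporting efficacy per learner group instead of one blended headline, is a short computation. A minimal sketch, with hypothetical group names and pre/post scores:

    ```python
    from collections import defaultdict
    from statistics import mean

    def efficacy_by_group(records: list[dict]) -> dict[str, float]:
        """Report mean outcome lift per learner group, not one blended average."""
        groups: dict[str, list[float]] = defaultdict(list)
        for r in records:
            groups[r["group"]].append(r["post"] - r["pre"])
        return {g: round(mean(lifts), 3) for g, lifts in groups.items()}

    records = [
        {"group": "screen_reader_users", "pre": 0.50, "post": 0.55},
        {"group": "screen_reader_users", "pre": 0.48, "post": 0.51},
        {"group": "all_other",           "pre": 0.60, "post": 0.72},
    ]
    # A single blended average would hide that one group barely improved.
    print(efficacy_by_group(records))
    ```

    Reporting the disaggregated numbers is what turns accessibility from a compliance checkbox into efficacy evidence: a gap between groups is visible and actionable rather than averaged away.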

    Takeaways for leaders
     Inclusive design expands who benefits from AI-supported practice and feedback. It improves outcomes while reducing risk. Accessibility should be part of your efficacy evidence, not a separate track.

    The 2026 Product Design and Strategy checklist

    If you want AI to remain credible in your product and program strategy, use these questions as your executive filter:

    • Can we show measurable improvement in mastery, progression, completion, and readiness that is attributable to AI interventions, not just usage?
    • Are our CTE and career enablement claims traceable to explicit competencies and authentic performance tasks?
    • Is AI governed with clear boundaries, human oversight, and consistent quality controls?
    • Do we have platform level patterns that standardize experiences, reduce variance, and instrument outcomes?
    • Is measurement continuous and tech-assisted, built for learning loops rather than retrospective reporting?
    • Do we measure efficacy across learner groups to ensure accessibility and equity in impact?

    Source link