Blog

  • EEOC Accuses Penn of Defying Subpoena

    Jumping Rocks/Universal Images Group/Getty Images

    The U.S. Equal Employment Opportunity Commission accused the University of Pennsylvania of waging an “intensive and relentless public relations campaign” to avoid complying with a subpoena related to an investigation into antisemitism, The Philadelphia Inquirer reported.

    The EEOC made the claim in a court filing on Monday, accusing the Ivy League university of impeding its investigation. The federal government is seeking information about Jewish students and employees, which the EEOC claims is necessary to identify potential victims and witnesses.

    The federal government initially requested the list in July.

    Penn, in a Jan. 20 legal filing, called the request an “extraordinary and unconstitutional demand that Penn assemble and produce lists of employees that reveal their Jewish faith or ancestry, associations with Jewish organizations, affiliation with Jewish studies, participation in programming for the Jewish community and/or de-anonymized responses to surveys on antisemitism, alongside their personal home addresses, phone numbers, and emails.”

    The federal government’s request for a list of Jewish students and employees has unnerved some in the university community who doubt the Trump administration is acting in good faith.

    Jewish students, scholars and the Penn chapter of the American Association of University Professors have pushed back on the EEOC’s demand. The national AAUP and AAUP-Penn filed a motion to intervene in the case, noting concerns about disclosing private information.

    “As our motion notes, to demand that Penn create and compile lists of Jewish people—particularly those active in political causes disfavored by the government—evokes the disturbing history of twentieth-century antisemitism. Beyond Penn itself, this lawsuit has serious implications for the religious liberties of AAUP members nationwide, as well as their right to engage in speech and scholarship without the threat of ideological conformity,” the AAUP wrote in a news release. “The government does not have the right to demand the private, personal information of religious minorities. We are waiting for the court’s decision on our motion.”

  • Time to Eliminate Prereq Developmental Ed for All Students

    In 2026, postsecondary education is under greater scrutiny than at any point in recent memory. There is a growing public sentiment that the high cost of postsecondary education doesn’t yield adequate value for students and their families. Too few students earn postsecondary credentials that result in economic and social mobility to justify the cost.

    This sentiment persists despite a national movement over the past 15 years to improve postsecondary attainment rates. Great progress has been made at many institutions and in postsecondary systems, but wide-scale adoption of evidence-based reforms remains elusive. At the top of the list of reforms that can reduce college costs and improve attainment rates, yet has not been scaled nationwide, is developmental education reform.

    There is more evidence than ever that eliminating prerequisite developmental education and implementing highly effective reforms is essential to improving student success rates. The Community College Research Center at Columbia University’s Teachers College has found that comprehensive student success reforms like improved advising or guided pathways won’t reach their full potential without first scaling developmental education reform.

    Overwhelming evidence shows that eliminating prerequisite developmental education, adopting corequisite math and English, aligning gateway math courses to programs of study, and using high school GPA for gateway math and English placement results in accelerated student progress to a degree. Despite the undeniable evidence of impact, these reforms have not been fully scaled. The report “Incomplete: The Unfinished Revolution in College Remedial Education” assessed the current state of reform:

    “The nation has made extraordinary progress over the past 15 years in defining the shortcomings of traditional remediation … and implementing reform at scale. But there has been stagnation and even reversals of promising reforms.”

    (Kim, 2024)

    For decades, prerequisite developmental education courses have negatively impacted postsecondary completion. Further, prerequisite courses have exacerbated inequities in gateway course completion and student success. The prerequisite approach increases time to degree and the cost of a postsecondary education and reduces the likelihood that students will complete a gateway math and English course. It is time to pull prerequisite developmental education courses from every course catalog in the nation.

    Research has shown that students, regardless of their preparation level, are more successful when placed directly in college-level math and English courses and receive additional corequisite support than students placed into prerequisite courses (The RP Group & California Community Colleges, 2025; Tennessee Board of Regents, 2025). Institutions using prerequisite developmental education courses have first-year gateway math course completion rates at around 20 percent and 45 percent in gateway English. These completion rates have doubled or tripled in states, systems and institutions that have eliminated prerequisite developmental education and implemented evidence-based reforms.

    Despite the undeniable success of developmental education reforms, few states and systems have adopted policy to scale the reforms. Seven states—California, Connecticut, Georgia, Nevada, Kansas, Louisiana and Texas—have comprehensive policies to scale these reforms at all postsecondary institutions.

    In the other 43 states, hundreds of thousands of students are still being placed into highly ineffective remedial courses. This past week, Strong Start to Finish, a network of state and national postsecondary leaders that works with states and postsecondary systems to scale developmental education reforms, released a new blueprint for the full-scale reform of developmental education in states.

    The 2026 Core Principles for Transforming Developmental Education Reform provide the field with the most up-to-date guidance to implement and scale the reforms. First appearing in 2012, with subsequent versions in 2015, 2020 and now 2026, the core principles build on more than a decade of evidence and research on reforms that have been rigorously tested at the state and system level.

    Strong Start has made the principles the cornerstone of its North Star goal: “All states ensure students are on track to graduate after their first year by 2040.” In 2026, Strong Start is launching a new campaign to engage policymakers, system leaders and practitioners in the critical work of scaling and sustaining developmental education reform. This campaign seeks to scale reforms for all postsecondary students in the nation.

    At a time when postsecondary education needs to demonstrate its value, the full-scale adoption and implementation of developmental education reforms are a foolproof way to reduce the costs for students and remove an unnecessary hurdle to a credential. Scaling developmental education reforms is absolutely necessary to improve outcomes for students and build greater public trust in postsecondary education.

  • L.A. Community Colleges Boost Work-Based Learning

    Deysi Perez was still in high school when she completed a college dental assisting program, earning an industry-recognized certificate and securing a job in the field—a pathway made possible through the workforce-development efforts at West L.A. College.

    Today, Perez, who first enrolled through the institution’s concurrent enrollment program—which allows high school students to take free classes on a community college campus—is continuing her studies toward becoming a dental hygienist.

    Andrea Rodriguez-Blanco, career center director at West L.A. College, said Perez is one of many students who have benefited from the college’s focus on work-based learning and career readiness.

    “We have students like [Perez] who are a testament to our work and who have really taken advantage of our services and support systems,” Rodriguez-Blanco said.

    Despite California’s significant investments in K–12, higher education and workforce development, Rodriguez-Blanco said the lack of coordination among them can leave students struggling.

    To address this, West L.A. College works in partnership with Compton College, El Camino College and Los Angeles Southwest College to create a regional, cross-sector strategy that expands career opportunities for community college students in Los Angeles County’s Second District.

    Matthew Jordan, interim president at West L.A. College, said the initiative originated during the pandemic.

    “The colleges had a large amount of both federal and state funds coming in to help us deal with the challenges of COVID,” Jordan said, noting that Keith Curry, president of Compton College, contacted California Competes to brainstorm ways to use the funds to benefit students and improve career-readiness programs.

    As a result of that conversation, Curry and California Competes, a nonpartisan organization focused on research and policy to improve the state’s higher education and workforce-development systems, brought other neighboring colleges into the discussion.

    “What surfaced was working on pathways to better the lives of residents in our community,” Jordan said. “How do we make career readiness transparent? How do we make it a campuswide responsibility?”

    The strategy: Since the partnership began in 2021, Rodriguez-Blanco said the four colleges have met quarterly to compare approaches to work-based learning and identify ways to collaborate.

    A key focus has been mapping the industries and employers each college works with. When programs overlap, the colleges coordinate outreach so employers don’t have to repeat the same conversation with multiple institutions.

    “By really looking at what we have in common and what our strengths are as a region, we can scale and have a bigger impact on our programs,” Rodriguez-Blanco said. “We’ve shared how each campus uses work-based training, how it’s integrated into our college … and found common ground so that whatever we do at [West L.A. College] can be easily replicated across other programs.”

    Jordan said this approach is important because “employers don’t necessarily want to be contacted separately by five colleges to have the same conversation five times.”

    “If we all have a similar program, we can approach the employer and build out the pipeline and work-based learning opportunities together in one process,” Jordan said.

    He added that the colleges have found it particularly useful to collaborate in fields such as biotechnology, artificial intelligence, child development and information technology.

    The stakes: Rodriguez-Blanco said the partnership helps amplify each college’s work-based learning programs while making it easier for students across all the campuses to access career opportunities.

    “The reason why these colleges came together is because we found that we had a really strong work-based learning support system,” Rodriguez-Blanco said. “We’ve already been bringing employers to the table, but how do we triple the effect?”

    Jordan said the partnership is important because students at their community colleges often face barriers to academic success, from food insecurity to long commutes to balancing family responsibilities. The initiative provides more pathways for students to participate in work-based learning and career programs while still in school, making it easier to gain practical experience while managing their schedules.

    “This program really seeks to address that issue of access to work-based learning,” Jordan said, noting that a specific goal of the partnership is to increase the number of paid internship opportunities, since community college students often don’t have the ability to take on unpaid internships.

    “If we can structure the work-based learning experience as part of their coursework, or ensure that it’s a paid internship, I think that really helps address one of the multiple barriers that students are facing,” he added.

    Ultimately, Jordan said, there is a lot of value for institutions in sharing practices when it comes to work-based learning.

    “Sometimes there’s a tendency to be elbowing each other, like we’re all fighting for the same opportunities,” Jordan said. “I would encourage colleges to abandon that attitude and really think about how they can work together to leverage the limited resources we have and benefit the communities we serve.”

  • N.C. Students Sue Election Officials Over Early Voting Sites

    Kena Betancur/Getty Images

    College Democrats of North Carolina and some North Carolina students sued state and local election officials Tuesday in an effort to restore three early-voting sites on college campuses. Republican-controlled election boards voted to reject the early-voting sites ahead of the 2026 primary in March.

    The lawsuit centers on denied sites at North Carolina A&T State University, North Carolina’s largest historically Black university; the University of North Carolina at Greensboro; and Western Carolina University. (An early-voting site at Elon University was also rejected.) The lawsuit says North Carolina A&T students fought for their on-campus early-voting site ahead of the 2020 election. Western Carolina University and UNC Greensboro had operated such sites since 2016 and at least 2012, respectively.

    The plaintiffs argue that the closures “intentionally target the rights of young voters.”

    “This case is about targeted efforts to place additional, unnecessary, burdensome, and ultimately unjustifiable obstacles between students at three North Carolina universities—including the nation’s largest historically Black university—and this fundamental constitutional right,” the lawsuit says.

    Despite the closures, the state will have 10 on-campus early-voting sites in total this year, up from nine in 2022, The Raleigh News & Observer reported.

  • From goodwill to obligation: making equity systemic in higher education

    This blog was kindly authored by Afzal Munna, Senior Lecturer at the University of Hull London.

    There is a familiar pattern in higher education equity debates. Persistent disparities are acknowledged. Frameworks, strategies and guidance documents proliferate. Responsibility is shared so widely that, in practice, it is owned by no one. Progress depends on goodwill, isolated champions and optional initiatives – many of which work locally, briefly, or unevenly.

    Equity in higher education is widely endorsed in principle, but inconsistently delivered in practice, and too often framed as a problem of individual deficit rather than structural design.

    The problem with deficit thinking

    Much of the equity work in universities is implicitly micro-focused. Students are supported to be more confident, more resilient and more academically prepared. Study skills workshops, mentoring schemes and induction programmes are rolled out – often with good intentions and positive local effects. But this framing assumes the core system is neutral and that inequity emerges primarily from student characteristics. Evidence suggests otherwise. For migrant and international students in particular, access to higher education has improved significantly, yet participation, belonging and continuation remain uneven. The issue is not simply who students are, but how institutional routines interact with their identities. Equity problems, in other words, are not located in students. They are produced through misalignments between learners’ lived experiences and the assumptions embedded in curriculum design, pedagogy, assessment and institutional policy.

    Equity as a systemic property

    My doctoral research responds to this challenge through the Intersectional–Multilevel Equity (IME) model, which reframes equity as a system-level outcome rather than a collection of individual interventions.

    The IME model identifies three interdependent levels of the higher education system:

    • Micro: students’ relationships, sense of belonging, language use, confidence and access to social resources.
    • Meso: curriculum design, pedagogy, assessment practices, induction, mentoring and classroom culture.
    • Macro: institutional strategies, resourcing, workload allocation, regulatory frameworks and staff development.

    Equity emerges – or fails – through the alignment (or misalignment) of these layers. Treating any one level as sufficient is an error.

    What the evidence shows

    Using a mixed and integrative methodology, this research made visible how intersecting identities (such as migration status, language background and socio-economic positioning) operate differently across educational contexts.

    Several patterns stood out.

    First, mentoring and peer-support schemes were most effective when embedded within inclusive curriculum practices, rather than positioned as compensatory add-ons. Where mentoring was disconnected from teaching and assessment design, its impact was limited. Second, assessment transparency mattered more than assessment type. Clear exemplars, scaffolded tasks and explicit success criteria significantly reduced linguistic and cultural disadvantage, without lowering academic standards.

    Third, staff development focused on equity was most impactful when linked to everyday teaching decisions – not abstract principles. Lecturers reported increased confidence navigating diverse classrooms when institutional signals, workload planning and professional learning were aligned.

    Finally, student voice was not simply consultative but corrective. Interventions designed with students better reflected lived barriers and produced more relevant solutions.

    Why alignment matters

    The central lesson is that equity initiatives succeed or fail based on cross-level coherence. When micro-level support (such as study skills provision) is separated from meso-level curriculum design, students are asked to adapt to systems that remain unchanged. When macro-level strategies set ambitious targets without resourcing or accountability mechanisms, implementation becomes symbolic rather than structural. Systemic change requires resetting the baseline – from optional good practice to expected responsibility. In higher education, equity is too often treated as aspirational rather than operational.

    From aspiration to obligation

    This is not an argument for micromanagement or rigid standardisation – a systemic approach to equity does not script teaching. Instead, it establishes a minimum floor of responsibility, below which institutions should not fall.

    Practically, this means:

    • Co-designing induction and mentoring with students to support early belonging.
    • Auditing assessments for fairness and transparency, not merely outcomes.
    • Embedding students’ cultural and linguistic experiences into disciplinary teaching.
    • Linking equity-focused staff development to programme-level planning.
    • Ensuring institutional strategies are resourced, evaluated and visible in classroom practice.

    None of these actions are radical in isolation. Their transformative potential lies in their alignment.

    Conclusion

    Equity in higher education will not be achieved through isolated initiatives, goodwill alone or perpetual pilots. Like other forms of safety and responsibility, it requires systemic expectation, structural coherence and institutional ownership. The IME model does not offer a silver bullet. It offers a reframing: equity as something institutions do by design, not something students must overcome by resilience. Until higher education stops treating equity as optional, we will continue to recognise the problem, reinvent partial solutions – and find ourselves exactly where we have been before.

  • The debate over AI in education is stuck. Let’s move it forward in responsible ways that truly serve students

    by Maddy Sims, The Hechinger Report
    January 29, 2026

    Artificial intelligence is already reshaping how we work, communicate and create. In education, however, the conversation is stuck.

    Sensational headlines make it seem like AI will either save public education (“AI will magically give teachers back hours in their day!”) or destroy it completely (“Students only use AI to cheat!” “AI will replace teachers!”).

    These dueling narratives dominate public debate as state and district leaders scramble to write policies, field vendor pitches and decide whether to ban or embrace tools that often feel disconnected from what teachers and students actually experience in classrooms.

    What gets lost is the fundamental question of what learning should look like in a world in which AI is everywhere. And that is why, last year, rather than debate whether AI belongs in schools, approximately 40 policymakers and sector leaders took stock of the roadblocks in an education system designed for a different era and wrestled with what it would take to move forward responsibly.

    The group included educators, researchers, funders, parent advocates and technology experts and was convened by the Center on Reinventing Public Education. What emerged from the three-day forum was a clearer picture of where the field is stuck and a shared recognition of how common assumptions are holding leaders back and of what a more coherent, human-centered approach to AI could look like.

    We agreed that there are several persistent myths derailing conversations about AI in education, and came up with shifts for combating them.

    Myth #1: AI’s biggest value is saving time for teachers

    Teachers are overburdened, and many AI tools promise relief through faster lesson planning, automated grading or instant feedback. These uses matter, but forum participants were clear that efficiency alone will not transform education.

    Focusing too narrowly on time savings risks locking schools more tightly into systems that were never designed to prepare students for the world they are graduating into.

    The deeper issue isn’t how to use AI to save time. It’s how to create a shared vision for what high-quality, future-ready learning should actually look like. Without that clarity, even the best tools quietly reinforce the same factory-model structures educators are already struggling against.

    The shift: Stop asking what AI can automate. Start asking what kinds of learning experiences students deserve, and how AI might help make those possible.

    Myth #2: The main challenge is getting the right AI tools into classrooms

    The education technology market is already crowded, and AI has only added to the noise. Teachers are often left stitching together core curricula, supplemental programs, tutoring services and now AI tools with little guidance.

    Forum participants pushed back on the idea that better tools alone will solve this problem. The real challenge, they argued, is to align how learning is designed and experienced in schools — and the policies meant to support that work — with the skills students need to thrive in an AI-shaped world. An app is not a learning model. A collection of tools does not add up to a strategy.

    Yet this is not only a supply-side problem. Educators, policymakers and funders have struggled to clearly articulate what they need amid a rapidly advancing technology environment.

    The shift: Define coherent learning models first. Evaluate AI tools based on whether they reinforce shared goals and integrate with one another to support consistent teaching and learning practices, not whether they are novel or efficient on their own.

    Myth #3: Leaders must choose between fixing today’s schools and inventing new models

    One of the tensions dominating the discussions was whether scarce state, local and philanthropic resources should be used to improve existing schools or to build entirely new models of learning.

    Some participants worried that using AI to personalize lessons or improve tutoring simply props up systems that no longer work. Others emphasized the moral urgency of improving conditions for students in classrooms right now.

    Rather than resolving this debate, participants rejected the false choice. They argued for an “ambidextrous” approach: improving teaching and learning in the present while intentionally laying the groundwork for fundamentally different models in the future.

    The shift: Leaders must ensure they do not lose sight of today’s students or of tomorrow’s possibilities. Wherever possible, near-term pilot programs should help build knowledge about broader redesign.

    Myth #4: AI strategy is mainly a technical or regulatory challenge

    Many states and districts have focused AI efforts on acceptable-use policies. Creating guardrails certainly matters, but when compliance eclipses learning and redesign, it creates a chilling effect, and educators don’t feel safe to experiment.

    The shift: Policy should build in flexibility for learning and iteration in service of new models, not just act as a brake pedal to combat bad behavior.

    Myth #5: AI threatens the human core of education

    Perhaps the most powerful reframing the group came up with: The real risk isn’t that AI will replace human relationships in schools. It’s that education will fail to define and protect what is most human.

    Participants consistently emphasized belonging, purpose, creativity, critical thinking and connection as essential outcomes in an AI-shaped world.

    But they will be fostered only if human-centered design is intentional, not assumed.

    The shift: If AI use doesn’t support students’ connections between their learning, their lives and their futures, it won’t be transformative, no matter how advanced the technology.

    The group’s participants did not produce a single blueprint for the future of education, but they came away with a shared recognition that efficiency won’t be enough, tools alone won’t save us and fear won’t guide the field.

    The question is no longer whether AI will shape education. It is whether educators, communities and policymakers will look past the headlines and seize this moment to shape AI’s role in ways that truly serve students now and in the future.

    Maddy Sims is a senior fellow at the Center on Reinventing Public Education (CRPE), where she leads projects focused on studying and strengthening innovation in education.

    Contact the opinion editor at [email protected].

    This story about AI in education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

    This article first appeared on The Hechinger Report (https://hechingerreport.org/opinion-ai-education-responsible-ways-serve-students/) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).

  • A statutory duty of care won’t fix student safeguarding. Implementation will

    Calls for a new statutory duty of care for universities are politically understandable.

    But they misdiagnose the problem. Higher education institutions already operate within a dense framework of duties. The real challenge is inconsistent interpretation, messy system boundaries, and a lack of confidence that existing obligations are being applied well.

    The idea has a certain inevitability to it. Something goes wrong, a student is harmed, and the instinct is to reach for a simple mechanism to prevent recurrence – make universities legally responsible, in black and white, and they will do better.

    It’s an emotionally compelling proposition. It’s also operationally misdirected.

    Providers in England already sit within a complex set of responsibilities – common law duties, duties under the Equality Act, safeguarding responsibilities, health and safety requirements, and regulatory obligations under various regulators.

    The question is not whether universities have responsibilities. They do. The critical question is why, despite this, we still see uneven practice, disputes about accountability, and low confidence in how student risk is identified and managed.

    The answer is not “no duty”. The answer is that existing duties are applied inconsistently, in a system where the boundaries between universities and statutory services are blurred, and where misunderstandings – particularly around information sharing and existing statutory responsibilities – lead to poor outcomes.

    Understandably, any reliance upon the common law duty of care draws significant criticism from those advocating a statutory duty. The common law duty describes an obligation to take reasonable steps to prevent foreseeable harm – yet whether that duty existed is determined by a court, only after harm has occurred.

    Whether such a duty exists in a given case is legally complex and typically assessed using tests such as the Caparo test, which considers foreseeability, proximity, and whether it is fair, just, and reasonable to impose a duty. Applying these principles to universities is not straightforward. Universities are therefore expected to support students through reasonable and proportionate measures without assuming responsibilities that would sit with parents, guardians, or statutory agencies.

    When taking into account the complexity and lack of utility of the common law duty, alongside the extensive statutory framework which also applies to universities, the result is a distinctive legal environment. Universities serve diverse adult populations, alongside under-18s, but unlike schools they operate without a single, clearly codified safeguarding framework. Instead, they must interpret duties drawn from multiple statutes not designed with higher education in mind, creating ongoing uncertainty about the scope of responsibility and appropriate institutional action.

    When people argue for a statutory duty of care, they are often trying to solve a real problem – inconsistency, variability, and the fear that lessons are not being learned. But adding a new duty on top of existing duties doesn’t automatically solve any of that. It may simply create a new label for old failures.

    Consistency, not law

    Across the sector, there are examples of excellent safeguarding and student support practice. There are also approaches that are fragmented, under-developed, or overly dependent on individual staff confidence and local culture.

    That variation isn’t surprising. Universities vary in size, mission, student demographics, commuter and residential patterns, and how student support is organised. Yet student risk doesn’t respect those organisational differences. Similar situations recur across institutions, whether that’s a student disclosing self-harm, serious mental ill health emerging during study, domestic abuse, harassment, estrangement, homelessness, substance misuse, or an acute crisis requiring rapid multi-agency response.

    In practice, universities interpret what counts as “reasonable steps” in these contexts in very different ways. The result is a postcode lottery of threshold decisions, escalation routes, information-sharing practices, and approaches to family engagement.

    If we want fewer tragedies and better safeguarding, that inconsistency is where attention should go.

    The key test is whether a statutory duty would change behaviour in the real world, not whether it’s morally satisfying or politically saleable.

    A simple logic model helps. If the input is a new statutory duty, what activities does that reliably trigger across a diverse sector, and what outputs follow?

    In reality, many institutions would respond first by doing what organisations do under legal uncertainty – protect themselves. That typically means commissioning legal advice, rewriting policies, expanding documentation, formalising risk panels and escalation structures, increasing training, and narrowing discretion for staff. Those steps may look like progress, but they don’t guarantee the outputs that matter – earlier identification of risk, faster access to appropriate support, clearer multi-agency roles, and consistent, competent decision-making.

    Adult autonomy

    There’s a further complication – universities educate adults, in the main. Safeguarding in higher education sits in a different ethical and legal terrain from schools. Students have autonomy, privacy rights, and legal capacity. Any duty that implicitly encourages intrusive interventions risks undermining those principles.

    It’s not hard to imagine how this plays out. If staff feel they are being held liable for “clinical risk” they aren’t trained or resourced to manage, universities may respond by tightening “fitness to study” pathways, increasing coercive expectations of engagement, or relying more heavily on exclusionary decisions. A system can become safer on paper while becoming less humane in practice.

    Nor would these challenges be overcome by establishing a duty similar to that which exists between employers and employees in the UK. The duty of care owed by employers to employees is historically and doctrinally grounded in control, dependency, and risk creation. Employers organise work, design systems, set hours, provide equipment, and direct conduct within environments that employees cannot realistically redesign or avoid.

    The risks in question are inherent in the employment relationship itself and arise from how labour is structured and managed. The employer therefore exercises operational control over both the hazard and the conditions of exposure, and the employee’s autonomy is correspondingly constrained by the terms of the contract. It is this combination that makes a statutory duty of care coherent – responsibility follows control, and compliance can be expressed through preventive standards that can be enforced.

    Such logic doesn’t translate well to the relationship between universities and students. Universities don’t organise students’ lives in the way employers organise work. They don’t create most of the risks associated with student mental distress – with such distress pre-existing for many – and they don’t exercise continuous control over students’ conduct, environment, or health.

    Students are, mostly, autonomous adults, free to enter and leave, to refuse support, and to make decisions that fall outside institutional direction. Mental health crises and suicide are not products of an institutional process to the same degree as unsafe systems of work, but outcomes of complex, largely external causal factors.

    To impose an employer-style duty of care on universities would therefore sever the historical link between control and responsibility, transforming a limited educational relationship into a quasi-guardianship role that neither reflects the reality of university authority nor provides a mechanism capable of preventing the harms in question.

    The statutory services fault line

    One of the most persistent issues in university safeguarding is the interface with the NHS, police, and local authorities.

    Universities are not clinical or social care providers. They don’t diagnose, treat, or detain. Nor are they responsible for assessing whether an individual requires intervention under the Children Act 1989 or the Care Act 2014. Yet universities increasingly find themselves holding such risks because statutory services are inaccessible, thresholds are high, or pathways are unclear.

    In that environment, calls for a statutory duty can operate as a kind of displacement activity. We focus on the institution that is visible and convenient – the university – rather than confronting the structural reality that serious mental ill health and immediate risk should be met by a functioning health and care system.

    If the policy debate drifts into implying that universities should compensate for failures elsewhere, we create a perverse dynamic. Universities are pressured to hold risk without the tools, expertise, or remit to treat it. A statutory duty wouldn’t repair that boundary problem. It could entrench it, by reinforcing the assumption that universities are the default responsible actor when the problem is, at least in part, capacity and accountability in statutory services.

    Information-sharing myths

    Another recurring operational barrier is misunderstanding about information sharing. It’s common to see both universities and statutory agencies citing “GDPR” as a reason not to share information, sometimes correctly, often not.

    The outcome is predictable. Statutory agencies lack visibility of risk that would help them intervene. Universities delay escalation for fear of breaching privacy. Families are excluded by default even where engagement could be lawful and helpful. Decisions become inconsistent and dependent on individual staff confidence.

    A statutory duty of care doesn’t automatically clarify any of this. What would help is clear, scenario-based guidance that gives staff confidence about what they can share, when, and why – paired with shared protocols between universities and statutory partners.

    If not a new duty, what then?

    If we accept that the problem is implementation rather than legal absence, the agenda becomes more practical and more likely to improve outcomes. The Office for Students (OfS), as the regulator for higher education in England, is best-placed – and empowered by s.5 of the Higher Education and Research Act 2017 – to move the sector forward.

    First, we need clearer guidance on applying existing duties. That means sector-wide clarity on what “reasonable steps” look like in common scenarios – escalation of risk, reasonable adjustments, safeguarding triage, placement risk, serious incident response, and family engagement. Not vague principles, but operational guidance that supports consistent practice.

    Second, we need stronger regulatory assurance and minimum expectations. If students and families have low confidence, it isn’t enough to say duties exist. There must be visible assurance that compliance is tested and that poor practice has consequences. That requires auditable expectations around competence, escalation models, governance, and learning after serious incidents.

    Finally, we need to repair relationships with the NHS, police, and local authorities. Universities need reliable multi-agency pathways with shared thresholds, named points of contact, agreed protocols, and clarity on who does what when risk escalates. This is the work that prevents universities being left holding clinical risk in isolation. However, it isn’t enough to direct providers to “form partnerships with NHS partners” in their area. Barriers to working alongside statutory agencies exist on all sides of the conversation, with students sometimes being treated as second-class citizens where agencies identify them as attending university.

    A logic model asks the question – do the above proposals plausibly connect the problem being experienced to the outcomes being sought? When applied to concerns about student mental wellbeing and safeguarding, the core problems consistently identified are inconsistent practice, uncertainty among staff, fragmented institutional responses, and low confidence in accountability. These are problems of interpretation, application, and system coordination, rather than problems of legal absence.

    Regulatory guidance grounded in existing legal and regulatory frameworks passes this test because it intervenes directly at the point where failure most often occurs – how duties are understood and enacted in practice. Unlike new legislation, guidance doesn’t merely restate responsibility. It articulates expectations, clarifies thresholds, and translates legal principles into operational decision-making frameworks that institutions can actually use.

    From a logic-model perspective, the pathway is clear. Guidance changes the inputs available to institutions – shared definitions, common standards, and authoritative interpretation – which enables different activities, including consistent training, aligned policies, clearer escalation routes, and more confident engagement with external services. These activities generate tangible outputs – improved decision-making consistency, better documentation of safeguarding judgements, and more coherent institutional responses. Crucially, these outputs are logically and directly linked to the intended outcomes – safer practice, improved student experience, greater confidence for families, and stronger regulatory assurance.

    Regulatory guidance also strengthens the wider system. By setting explicit expectations, it supports regulators to assess performance, challenge poor practice, and intervene. It enables universities to design support structures that align with both legal duties and statutory services. It creates a shared language across higher education, health, and social care, which is essential for early intervention and partnership working. Guidance aligns institutional behaviour, regulatory oversight, and cross-sector coordination around the same operational understanding.

    Regulatory enforcement of such guidance would be essential to overcome the “voluntary” nature of a majority of existing good practice. OfS has recently exercised its authority to require providers in England to comply with its set standards under the E6 Condition of Registration. It’s still too early to determine the effectiveness of this measure but, if universities become fundamentally safer as a result, will we have effectively seen a pilot of how to improve understanding of duty of care within higher education?

    We’ll be here again

    Because the debate of 13 January 2026 was framed as “should we legislate or not”, we got predictable positions and limited progress. The government’s response, relying on the work of the Higher Education Mental Health Implementation Taskforce (HEMHIT) and Student Minds’ University Mental Health Charter, has already failed to fully reassure advocates for a statutory duty.

    Therefore, I fully anticipate that government will find itself responding to the issue again. A more useful question would have been what would make safeguarding practice more consistent across the sector, and what would make statutory partners more reliable when risk escalates.

    The uncomfortable point is that the most effective reforms aren’t headline-grabbing. They’re about guidance, assurance, governance, competence, and multi-agency capacity.

    If the goal is fewer harms and better safeguarding, the most meaningful response is not a new duty. It’s requiring that the duties universities already have are applied consistently, and that they aren’t left substituting for statutory services that are meant to lead.


  • You can’t give students a voice. They already have one


    Every university claims to listen to students.

    There are surveys, panels, and partnership charters. But the more we talk about “student voice,” the more it feels like a mirage — visible from afar, shining with good intentions, but evaporating when you try to reach for it.

    As a design-thinking researcher and former student-engagement officer, I’ve spent years inside the machinery of voice work – consultations, focus groups, strategy meetings.

    What I’ve learned is we are listening without shifting power. Student voice has become a ritual of responsiveness, not a practice of reciprocity.

    In the past decade, universities have professionalised listening. We’ve built complex infrastructures for gathering feedback, but not for sharing authorship.

    Voice work has become a compliance exercise — a measure of satisfaction rather than a method of transformation.

    The problem isn’t silence – students are speaking all the time. The issue is translation: when their insights are filtered through layers of policy language and institutional defensiveness until they lose their edge. What began as dialogue becomes data.

    In the process, we confuse being consulted with being heard.

    Feeling safe vs feeling seen

    To make someone feel safe is to create ‘space’ for them. To make someone feel seen is to invest the time, patience, and trust to understand how they want to use that space and why.

    I saw this difference clearly while working on a student–staff co-creation project. In the early stages, student researchers often waited for permission – Was their contribution valid enough? Were they saying the right thing?

    But as trust deepened, something shifted. They began designing the process itself, suggesting new methods, reframing priorities, and challenging assumptions about what “impact” looked like.

    That shift is where genuine belonging begins. When people feel seen, they stop performing inclusion and start practising it.

    Much of voice work is framed around safety: creating inclusive spaces where students feel comfortable to speak. Safety is vital, but it’s a starting point, not the goal.

    A culture of care must also create space for being seen; to be recognised, credited, and empowered to influence change. This is what a reimagined approach to student voice looks like: not just making room at the table, but re-designing the table itself.

    Why the need? And why now?

    Since the pandemic, the sector has experienced what I like to call survey fatigue — an exhaustion with being asked, again and again, to give feedback that rarely seems to matter.

    The first time I realised this was when I was working on a project that required collecting feedback from students and staff across all faculties, asking them questions about how “seen” they currently feel in the curriculum of their chosen course at university.

    I tried everything to get people to fill in forms and surveys, from emails and Teams messages to social media shout-outs, newsletter drop-ins, and society collaborations. Nothing worked.

    This is when I realised that survey fatigue is real. Engagement today feels more transactional than ever, and trust is at an all-time low.

    Across the sector, “partnership” frameworks sound progressive but often operate like consultation in disguise. Students are invited to advise on projects already designed, to validate choices already made. Participation without power becomes a performance of inclusion.

    The challenge is not how to listen better, but how to design differently. Reimagining student voice means asking who sets the terms of engagement; whose definitions of value, tone, narrative and time frame dominate decision-making. Until those change, inclusion risks being aesthetic rather than structural.

    Reimagining through friendship

    In my work, I’ve turned to what I call friendship interviews — a methodology borrowed from participatory and feminist research traditions. Instead of formal interviews, these are conversations between peers who already share trust, care, or curiosity.

    The format shifts the dynamic from extraction (“tell me your story so I can record it”) to reciprocity (“let’s reflect on what this means for both of us”).

    I should acknowledge that the method I draw on — friendship interviews — is not new: practices of this kind have existed for centuries in indigenous, Global Majority, and marginalised community contexts, often as storywork, collective reflection, and relational accountability.

    Although these practices have recently surfaced and gained legitimacy through Western research frameworks and white voices in HE, their roots lie in the relational storytelling traditions of marginalised and indigenous communities, where reciprocity and care have always been forms of research.

    When students interview each other this way, the knowledge that emerges is intimate, emotional, and often invisible to traditional evaluation frameworks. These are not side stories but actual indices of impact, evidence that relationships are changing the culture of learning.

    Friendship interviews make visible what quantitative feedback flattens – the behavioural and relational changes that sustain inclusion.

    From mirage to movement

    The mirage of student voice will only fade when we stop chasing reflections and start building relationships. So what can we do?

    For starters, stop using phrases like “giving students a voice” – you cannot give someone what they already have. To make student voice meaningful again, you must start listening differently: stop treating voice as sound and start treating it as structure.

    That means listening differently, through methods that centre care and reciprocity, such as friendship interviews. It means evaluating relationally, capturing change through stories and behaviours, not just metrics.

    Understanding the intersectionality of student journeys is important too. Who do your stories speak to and who do they isolate? And designing beyond the pilot is crucial, embedding co-creation into the everyday fabric of teaching and governance.

    It’s also about getting comfortable with the uncomfortable. Voice work is all about embracing vulnerability. What worked? What didn’t work; what did you get wrong? Why? What are you doing about it? Trust is built in accountability; not in numeric displays of perfection.

    Universities must realise that they are not Bob-the-Builders. They can’t just magically fix problems that students are facing through a one-off pilot project or a series of surveys.

    But they can invest in understanding student journeys, not just through highlights but as whole, lived experiences.

    Real voice work begins when you start moving beyond the safety of what you already know, and invest in the seen-ness and clarity that comes with embracing relationality, storytelling, and power-distribution.


  • Universities need to get a grip on reasonable adjustments


    Part of the problem in covering the annual Disabled Students UK Access Insights report is that I’ve written this article before.

    Quotes like this could be from any year:

    That repeated exposure – of disclosing vulnerabilities and trying to convince others my difficulties are real – is exhausting, humiliating and, at times, traumatic.

    Students shouldn’t have to repeatedly advocate just for their course to be accessible – this is really draining.

    You are pretty much left to your own devices and feel that you would need to fight for everything.

    This is DSUK’s third annual survey – drawing on over 1,100 responses from disabled students across more than 110 UK higher education providers. They point to a stubborn, systemic problem that goodwill alone isn’t fixing.

    And the context is challenging. The sector is under significant financial pressure, disability services are stretched, and pandemic-era accessibility gains that transformed many disabled students’ experiences are being rolled back to “improve belonging” or drive up attendance.

    It poses uncomfortable questions – if attitudes toward disabled students have genuinely improved, and the data suggests they have, why isn’t that translating into reliable support?

    What would it actually take to close the gap between the adjustments universities agree to provide and the adjustments they actually deliver? And is the entire model of individualised reasonable adjustments – administered centrally, implemented locally, monitored by nobody – fundamentally broken?

    The danger is that (yet) another report that wags the finger lands on a sector with little “spare” capacity to act. But maybe the framing of delivering adjustments into that “spare” space is part of the problem.

    Handle with care

    Before getting into the findings, some important caveats. The DSUK survey is the largest disabled student-led dataset in UK higher education, and its consistency across three years gives it real value – but it is not, and does not claim to be, representative of all disabled students.

    Respondents are disproportionately engaged with disability support systems – 87 per cent have at least one adjustment agreed, compared with perhaps 30 per cent of declared disabled students sector-wide, and over half are in receipt of DSA.

    Only 2 per cent have not declared their disability to their institution. The students least connected to support – those who never disclosed, never engaged, dropped out early – are barely represented, making this a survey of the system’s users, not its avoiders.

    Year-on-year comparisons also require caution. Several key questions have moved from three-point to five-point Likert scales in 2025, which mechanically reduces “neither agree nor disagree” responses and inflates both agreement and disagreement.

    When the report shows satisfaction rising from 53 per cent to 61 per cent, or “having all support needed” jumping from 38 per cent to 52 per cent, we can’t really tell how much reflects real improvement versus methodological artefact.

    I’d add in passing that quite why an independent student organisation has to carry out this research when other countries manage to do this via state-sponsored bodies is beyond me.

    In the Netherlands, for example, the ECIO (Expertisecentrum Inclusief Onderwijs) is a national expertise centre focused on accessibility, support and inclusion for students in Dutch post-secondary education, and commissions and publishes research on disabled students and those with chronic conditions, and translates findings into institutional tools and guidance. Its national reports are impressive, and infrastructure of this sort enables focused work on tricky areas like internships.

    The good news is real

    Let’s start with what’s working, because the picture isn’t uniformly grim.

    Cultural change is happening. Four in five disabled students have not been made to feel unwelcome by staff, and that figure has improved. Eighty-four per cent have encountered at least one positive attitude from staff, nearly 70 per cent have heard staff say “it’s ok to need support”, and the proportion reporting that staff treat adjustments as “mere suggestions” has declined steadily across all three survey years.

    Students are more likely to report that institutions understand invisible disabilities, that mental health problems are taken seriously, that accessibility is seen as a shared responsibility.

    I’m really pleased with everything. I started as a mature student lacking all formal education, and slowly worked my way up from a level 2 science course right to PhD… The disability support really had the capacity to make or break my achievements.

    Disability services work when they’re accessible. Eighty per cent of students who met a disability advisor found them knowledgeable and helpful, 72 per cent with support plans felt empowered to influence the content, and when one-to-one support is delivered, 74 per cent find the staff skilled. The problem isn’t competence – it’s capacity, reach, and what happens after the plan leaves disability services’ hands.

    Some universal design measures are embedding. Seventy per cent of students receive slides in advance of lectures most of the time, extensions are becoming easier to access, and uncapped resits are slowly spreading.

    These aren’t dramatic numbers, but they represent real, sustained progress on provisions that benefit large numbers of students without requiring individual negotiation.

    Institutional proof points exist. London South Bank University delivers 63 per cent of agreed adjustments fully, against a sector average of 44 per cent, while the Universities of Bath and Kent achieve over 90 per cent lecture recording rates. These aren’t institutions with unusual student populations – they’re mainstream providers making different operational choices, and what’s achievable somewhere is, in principle, achievable everywhere.

    The bad news is structural

    But the good news sits alongside findings that ought to concern the sector. The report’s central message is that the gap between policy and practice is not closing.

    Eighty-seven per cent of disabled students have adjustments agreed, but only 44 per cent report that all those adjustments are actually delivered. Sixty-three per cent go without adjustments at least some of the time because chasing them takes too much energy, and 34 per cent have to chase up adjustments more than half the time:

    I have always needed to chase up adjustments due to university lack of resources and/or understanding of accessibility in general by facilities teams, timetabling via the access support staff and/or my head of department.

    Getting this help requires email after email between the departments who seem to communicate very poorly with each other.

    As the report puts it:

    The result is a system in which access exists on paper but fails in practice. This is not a failure of individual staff goodwill. It is the predictable outcome of systems that rely on disabled students to monitor, prompt, and repair the delivery of their own access.

    The process of getting support is itself disabling. Fifty-seven per cent have to explain the same thing about their disability repeatedly to different staff, 42 per cent experienced delays from disability services that negatively affected them, and even among students who submitted evidence more than two months before term started, only 59 per cent had their support plan in place when their course began:

    Neurodiverse and disabled individuals should not be the ones bearing the responsibility for educating everyone else in this sector.

    Pandemic gains are being rolled back. Remote attendance options have fallen to just 24 per cent, lecture recording has plateaued at 61 per cent – down from pandemic highs – and for students with fluctuating conditions, the return to inflexible in-person requirements has been particularly damaging:

    When I was unable to attend an in-person meeting and asked them to do an online meeting they refused saying it would be ‘too difficult’ which I found really frustrating as during COVID pandemic everyone was using online meetings.

    And in many cases, assessment remains inaccessible. Forty-six per cent believe they’ve received a lower mark because assessment wasn’t accessible to them, 55 per cent say assessment style doesn’t let them demonstrate their knowledge effectively, and only 14 per cent have access to alternative assessment forms for a majority of their assessments:

    Stammering also makes some forms of oral assessment difficult and stressful – I know that I’ve got worse grades in some areas of my course due to a perceived departmental unwillingness to change the setting of these assessments to accommodate me.

    The EHRC says that rapid escalation routes are one way a university can be compliant – but only 28 per cent know how to make a formal complaint, and of those who escalated an issue, only 32 per cent felt heard. Twenty-one per cent were treated worse after raising an access issue, and 40 per cent of complaints reaching the Office of the Independent Adjudicator in 2024 were from disabled students:

    I have no idea how to make a complaint without being singled out.

    My experience has been that concerns and formal complaints are shut down (or ignored entirely) rather than opportunities taken to learn and put things right.

    Thirty-six per cent have been discouraged from or refused an adjustment, and the reasons given are as upsetting as in other years – “not fair to other students” (46 per cent), “requirement of the course” (42 per cent), “if you get this everyone will ask” (31 per cent), “calls into question your fitness to study” (29 per cent):

    I have been threatened repeatedly with ‘fitness to practice’ as my health condition deteriorated and really they should have been looking at ways to support me.

    It all has health impacts. Forty-one per cent report their physical health negatively affected by undertaking their course, 54 per cent report negative mental health impacts, and one in five have previously left, changed course, interrupted, or gone part-time specifically due to inaccessibility:

    My experiences in requesting support and adjustments on this course and getting it in a timely and acceptable manner, and sometimes not getting it, have been generally traumatic and have negatively impacted my mental and physical health.

    Why change is so hard

    I’ve banged on before here about the default framing of student-raised issues as the “nice to haves”. The cultural assumption is often that everything is OK, and that what students say in an SSLC or a survey is usually a great idea if we find the time or resource.

    I’ve also banged on before here about the problem with that framing when it comes to disabled students’ legal right to be able to access their education and not be put at a disadvantage to their nondisabled peers. These are not luxuries.

    But there’s no point just finger wagging. The pattern the data shows – cultural improvement outpacing systemic improvement – is not accidental. It reflects something structural about how universities work, and about the design of the reasonable adjustments model itself.

    Service quality research distinguishes between attributes that delight when present (satisfiers) and attributes that infuriate when absent (dissatisfiers). Staff attitudes, feeling welcomed, sense of belonging – these are satisfiers, and investment here shows results.

    Actually delivering agreed adjustments, physical access, working complaints processes – these are dissatisfiers, where absence causes harm but presence is simply expected.

    The sense is that the sector is investing in satisfiers while the dissatisfiers remain broken – but you can’t delight your way out of failing on fundamentals.

    A friendly lecturer who doesn’t implement your adjustments is still failing you, and the warm email from disability services doesn’t help if the support plan never reaches your department.

    There’s also an accountability vacuum. If we apply basic deterrence logic to a department that doesn’t deliver agreed adjustments, the question is – what’s the expected cost of non-compliance?

    Expected cost = Probability of detection × Magnitude of sanction

    Both for universities being regulated by EHRC and OfS, and for departments being regulated by their “centre”, I expect that the probability of detection is near zero – the system relies entirely on disabled students noticing, chasing, escalating, and only 17 per cent ever escalate. Of those who do, 47 per cent have nothing resolved.

    The magnitude of sanction is also zero – the report identifies no consequences anywhere, not for individual staff, not for departments, not for institutions.

    Hence expected cost = Near-zero × Zero = Zero.

    The rational institutional and departmental response is to ignore adjustment requests unless the student makes enough noise. That’s effectively what the data shows.

    We could interrogate the results from the perspective of basic quality management thinking. If that were being applied, we’d see documented processes, monitoring of whether they’re followed, tracking of failures, implementation of corrective actions, and reporting of patterns to management. That is what ISO 9001 certification would require – but I suspect it is usually absent.

    If universities applied it to adjustment delivery, questions would arise. Is there a documented process for implementation at department level? Often no, because responsibility is “diffused across the institution with no single point of ownership.” Is delivery monitored? No – “only 15 per cent of providers have an established process for evaluating effectiveness,” according to OfS. Are failures tracked? No – the system relies on complaints most students don’t make. Does senior management see it? No – “accessibility competes with other priorities and loses.”

    If universities are, in effect, regulators of their own departments, there’s little evidence of an inspection function, poor definition and understanding of compliance standards, little monitoring data, and no consequences for failure.

    Power asymmetries run through the report. Disability services often can’t compel academic departments – they’re typically a service function, not a regulatory function, so they advise but don’t enforce. Academic departments have autonomy – disability services don’t have authority over them.

    So who does? A Pro-Vice-Chancellor (Education)? Quality assurance? The report’s recommendation to “assign clear senior ownership” is exactly this – someone with actual power over departments needs to own accessibility and use that power. But that requires data flowing to them (it doesn’t), accessibility in their objectives (it usually isn’t), and willingness to have difficult conversations with heads of department (the path of least resistance is don’t).

    The strategic failure

    But the deeper problem goes beyond implementation to the model itself.

    If over 60 per cent of your disabled students need recorded lectures, that’s not an “adjustment” – that’s baseline provision failing. If a university is writing “extensions available” into thousands of individual support plans, it has designed inflexible assessment and is papering over it student by student. When disability services are spending their time on casework that could be prevented by better universal design, they’re deploying a specialist service as a workaround for systemic inaccessibility.

    The report notes that 87 per cent of declared disabled students have at least one adjustment agreed, but it doesn’t break down which adjustments are most common – though we know from sector experience that some provisions appear repeatedly, including slides in advance, recorded lectures, extra time, rest breaks, extensions, and quiet exam rooms. These are not niche accommodations for unusual conditions – they’re mass requirements being processed as individual exceptions.

    The Equality Act’s anticipatory duty requires institutions to plan ahead for reasonable adjustments, to anticipate the barriers students with various impairments will face and design them out. Most universities know how many disabled students they have and roughly what conditions they have, but few have a strategic analysis of what adjustments they’re making repeatedly, at scale, that could be converted to universal provision.

    As I argued with Meg Darroch back in 2022, this is where regulatory leverage could help. If providers were required to analyse their own adjustment data, identify high-frequency accommodations, and demonstrate plans to convert them to anticipatory provision, things could shift – but currently they’re not even collecting data in a way that would enable the analysis.

    The disability service is not solely responsible for disabled students – all staff are.

    The report quotes this student advice approvingly. But making that real requires more than cultural change – it requires systems that surface where provision is failing, that hold departments accountable, that convert repeated individual accommodations into universal design, and that push universities to act as regulators of their own practice with the monitoring, enforcement, and consequences that implies.

    Eight priorities

    DSUK sets out eight priorities for institutions:

    • Make delivery of agreed support non-negotiable – clear assignment of responsibility, monitoring mechanisms, rapid resolution routes, consequences for repeated failures
    • Create dedicated capacity to coordinate and monitor universal design – not in disability services or time-limited working groups, but embedded in quality assurance with power to influence design
    • Resource disability services to advise, not compensate – align staffing with demand, protect advisory time, clarify their role as partners in design rather than sole owners of access
    • Reduce administrative burden as a matter of access – minimise evidence requests, integrate systems, design processes that work when students are unwell
    • Make escalation safe, easy, and effective – clear information, no-penalty guarantees, systematic learning from complaints
    • Assign clear senior ownership and governance – named senior leader, integration into quality assurance and risk management, transparent reporting to governing bodies
    • Protect progress in times of financial pressure – prioritise low-cost high-impact actions, avoid rolling back what works, assess cost-cutting for disability impact
    • Measure what matters – delivery rates not just agreement rates, time to support, resolution of failures, departmental variation

    These are sensible recommendations, carefully framed to be achievable even under financial pressure. But they’re also familiar – the sector has been told what to do for years, and the question is what happens when institutions don’t do it.

    The regulatory gap

    The report is notably diplomatic toward regulators. It references OfS data showing only 15 per cent of providers have established processes for evaluating reasonable adjustments, but doesn’t call for regulatory intervention to change this. It notes that 40 per cent of OIA complaints come from disabled students, but doesn’t ask what EHRC or OfS should do about such a clear pattern.

    This restraint is probably strategic – DSUK works with institutions and needs constructive relationships. But someone needs to say the harder thing – the current regulatory posture is not working.

    OfS tracks outcomes – registration conditions could require monitoring of delivery rates, could mandate reporting to governing bodies, could set intervention thresholds. They don’t.

    EHRC has strategic enforcement powers that exist precisely for patterns of systemic failure across a sector, and 40 per cent of ombudsman complaints from one protected characteristic group is exactly that pattern. The Abrahart case established that duties don’t depend on students completing processes – has EHRC tested whether institutions are applying that principle? Where are the section 20 investigations?

    The deterrence literature is clear – credible threat of enforcement improves compliance even without widespread actual enforcement. One high-profile investigation, one regulatory intervention with real consequences, would shift institutional risk calculations across the sector. Right now the threat isn’t credible, and everyone knows nothing happens.

    The timing matters. Universities are making financial decisions right now that will shape disabled students’ experience for years, disability services are under pressure, and pandemic-era provisions are being reviewed. The report’s message – that cutting accessibility increases long-term legal and reputational risk – needs to land with governing bodies and finance directors, not just disability practitioners.

    But the harder question is whether anything structural will change. The sector has had reports, guidance, good practice examples, and legal clarifications for years, and while attitudes have improved, systems haven’t. The burden continues to fall on disabled students to make broken processes work.

    The blatant disregard for the Equality Act 2010 and the institution’s Public Sector Equality Duty is not merely an oversight – it is institutional discrimination. Where is the accountability? Where is the leadership upholding inclusion as more than a performative statement?

    That question deserves an answer. The report provides more evidence – what it can’t provide is the will to act, at institutional level, at regulatory level, at governmental level. It’s time for those in positions of power – both nationally and institutionally – to earn their big management bucks, diagnose the delivery issue in detail and deploy some meaningful strategy.


  • Authentic Content for Online Programs: Proof-Driven Ideas

    Authentic Content for Online Programs: Proof-Driven Ideas


    Fully online programs are no longer emerging alternatives. They are established, competitive, and increasingly scrutinized. Prospective students understand that online learning is widely available. What they question is whether a specific program is credible, engaging, supportive, and capable of delivering real outcomes.

    This shift has fundamentally changed how institutions must approach online program marketing. Generic messaging, polished stock imagery, and surface-level claims no longer build confidence. Today’s prospects are looking for proof. They want to understand what learning actually looks like, who they will interact with, how support works in practice, and what outcomes they can realistically expect after graduation.

    This is where authentic content for online programs becomes one of the most powerful enrollment drivers available. Authenticity reduces perceived risk, shortens decision cycles, and builds trust in online learning long before a prospect ever speaks with an admissions advisor.

    At Higher Education Marketing, we see this pattern repeatedly. Institutions that invest in proof-driven storytelling—grounded in real student experiences, outcomes, and transparency—consistently outperform those that rely on abstract promises. This guide breaks down how to create creative content for online courses that earns trust, clarifies the learning experience, and supports sustainable enrollment growth.

    What “Authentic Content” Really Means in Online Program Marketing

    Authenticity in education marketing is often misunderstood. It does not mean being informal, unpolished, or casual with institutional branding. Authentic content is defined by credibility, specificity, and verifiability. In short, it must sound true—and be true.

    A useful test: if your content could appear on another institution’s website with little or no change in meaning, it isn’t authentic enough.

    Authentic content for online programs should:

    • Demonstrate how learning actually works in your online environment
    • Feature real students, instructors, and support staff—not stock representations
    • Address both benefits and challenges of online learning
    • Set clear expectations around workload, timelines, and outcomes
    • Support claims with concrete examples or data

    Authenticity also means answering the questions prospective students are already asking. What is faculty engagement like online? How often do students interact with peers? What support exists if they fall behind? Will this credential lead to real career opportunities?

    When institutions answer these questions clearly—and support them with evidence such as course previews, alumni outcomes, or faculty welcome videos—they build trust by default. This type of content does not rely on slogans. It earns confidence by being specific, transparent, and grounded in lived experience.

    How do you make online programs feel real to prospective students?
    Show how learning actually happens. LMS walkthroughs, assignment previews, and real student stories turn an abstract promise into a tangible experience.

    Are you looking for education marketing services?

    Our expert digital marketing services can help you attract and enroll more students!

    Why Trust Is the Primary Conversion Barrier for Fully Online Programs

    Unlike on-campus programs, fully online offerings must build trust without physical presence. Prospective students cannot tour facilities, attend in-person events, or casually meet faculty. Every credibility signal must come through digital touchpoints.

    This creates a trust gap—and it is often the biggest barrier to conversion.

    Common concerns include:

    • Will I feel isolated?
    • Are instructors accessible and engaged?
    • Will employers value this credential?
    • What academic and career support will I receive?
    • Can I realistically balance this program with work and family life?

    Institutions must address these concerns directly. Not with reassurance, but with evidence. Trust-building content should reduce uncertainty at every stage of the funnel—from search and program pages to nurture emails and application follow-up.

    Effective trust signals include:

    • Online-specific student success stories
    • Transparent explanations of course structure and faculty engagement
    • Visible instructor presence
    • Clear depictions of peer interaction and community
    • Outcome data supported by alumni or employer validation

    Trust is not built with a single asset. It requires consistency. When credibility signals appear throughout the student journey, confidence grows—and conversions follow.

    The Proof Stack: Seven Trust Signals That Convert Online Prospects

    High-performing online program marketing is built on proof, not promises. Leading institutions deploy a layered proof stack—a coordinated system of content assets designed to address specific student concerns.

    Each layer removes friction. Together, they create clarity.

    1. Outcome Proof

    Show what happens after graduation by highlighting graduate success stories within your authentic content for online programs. Share specific, verifiable outcomes such as job placements, promotions, salary growth, licensure results, or portfolio examples. Concrete evidence like this does far more to build trust in online learning than broad claims about “career readiness.”

    2. Experience Proof

    Show the learning environment itself. LMS screenshots, sample assignments, course modules, and weekly schedules demystify the experience and reduce anxiety.

    3. Faculty Proof

    Instructor quality matters deeply online. Introduce faculty as active educators. Short videos, interviews, or Q&As explaining feedback style and engagement expectations build confidence.

    4. Support Proof

    Demonstrate how advising, tutoring, technical help, and career services function for online students. Testimonials describing real support moments are especially powerful.

    5. Community Proof

    Isolation is a common fear. Counter it with visible evidence of interaction: discussion boards, cohort models, group projects, and virtual events.

    6. Credibility Proof

    Accreditation, rankings, partnerships, and employer recognition reinforce legitimacy—especially when tied specifically to online offerings.

    7. Integrity Proof

    Be honest. Clarify who the program is—and is not—for. Address time commitments and expectations openly. Transparency builds credibility faster than perfection.

    No single asset builds trust alone. The strongest strategies distribute proof across the funnel, allowing evidence—not persuasion—to do the work.

    Creative Content for Online Courses: Proof-Driven Ideas That Scale

    Effective online course marketing is not about flashy production. It’s about relevance. Creativity in this context means answering real questions with clarity and evidence.

    Scalable, high-impact ideas include:

    • “A week in the life” student profiles
    • LMS walkthrough videos
    • Assignment-to-career skill explainers
    • Faculty office-hour previews
    • Student decision-journey testimonials
    • Alumni outcome spotlights
    • Discussion board or live session previews
    • Short FAQ videos addressing workload and flexibility

    Each asset should serve a single purpose: reduce doubt and build confidence. When content answers real concerns with real proof, it becomes both creative and effective.

    What types of content build trust fastest for fully online courses?
    Proof-focused content—outcomes, faculty presence, support visibility, and clear expectations—outperforms general promotional messaging.

    Showcasing the Online Learning Experience (Without Overproduction)

    Prospective students do not need cinematic videos. They need visibility.

    Screen recordings, narrated walkthroughs, and lesson previews are effective storytelling formats because they show what learning actually looks like. A simple LMS tour or assignment walkthrough answers practical questions and builds familiarity.

    When video is not possible, authentic storytelling can still be delivered through:

    • Written graduate success stories that highlight real outcomes
    • Anonymized learning journey case studies that show progress over time
    • Instructor-led lesson explanations that clarify teaching style and expectations
    • Platform demos that reveal how students engage with course materials
    • Audio interviews that capture candid student or faculty perspectives

    Clarity beats polish. When students can visualize the experience through effective storytelling, uncertainty fades—and confidence follows.

    Online Student Testimonials That Feel Credible

    Strong testimonials follow a narrative structure:

    1. Starting point: Who is the student, and why did they enroll?
    2. Challenge: What concerns or obstacles did they face?
    3. Support moment: Where did the institution make a difference?
    4. Outcome: What changed as a result of the program?
    5. Advice: What would they tell future students?

    Avoid anonymous praise. Specificity builds trust. Include names, programs, timelines, and real outcomes whenever possible.

    What makes an online testimonial credible?
    Context, specificity, and lived experience. Avoid generic statements and overly polished language.

    Online Learning Community Building: Making Connection Visible

    Community is one of the most questioned—and misunderstood—aspects of fully online education. Prospective students often assume that without physical proximity, meaningful connection is limited or nonexistent: if they cannot see interaction, collaboration, and peer engagement, they conclude it simply isn’t there.

    This perception represents a major emotional barrier to enrollment. While flexibility and access attract interest, uncertainty around belonging and support often stalls decision-making. In online education, absence of visible community is interpreted as absence of community itself.

    This is where intentional storytelling becomes critical.

    To counter skepticism, institutions must actively show how students connect, collaborate, and support one another throughout the online learning experience. Community cannot be implied; it must be demonstrated through clear, observable proof points embedded across program pages, content hubs, and recruitment campaigns.

    Effective community-focused storytelling does not rely on vague claims about “engagement” or “collaboration.” Instead, it makes interaction tangible by revealing how connection actually unfolds in day-to-day learning.

    High-impact examples include:

    • Screenshots or short clips of live class sessions with visible discussion, questions, and instructor facilitation
    • Real examples of group projects, including collaboration tools (shared documents, discussion threads, virtual workspaces)
    • Clear overviews of mentorship programs, highlighting how peer mentors, alumni, and faculty interact with students
    • Spotlights on student-led initiatives, clubs, or virtual events that extend beyond coursework
    • Evidence of consistent instructor presence through discussion board participation, feedback examples, and guided conversations

    When presented well, this type of storytelling reframes online learning from a solitary experience into a shared academic journey. Prospective students begin to visualize themselves participating—not passively consuming content, but actively engaging with peers, instructors, and a broader learning network.

    Crucially, visible community reduces one of the most powerful emotional objections to online education: the fear of going through the experience alone. When connection is made explicit, confidence replaces hesitation.

    Real-World Examples From Prestigious Institutions 

    Harvard Business School Online: HBS Online emphasizes learner outcomes and authenticity by showcasing real student success stories and measurable results. On its site, the school highlights how its certificate programs lead to tangible career advancements – learners report job promotions, salary increases, and career transitions as a direct result of the online courses. The inclusion of learner testimonials and outcome data builds credibility, allowing prospective students to see the real-world impact of HBS Online’s programs.


    Source: Harvard Business School Online

    UC Berkeley School of Information: Berkeley’s I School provides an online experience video library that offers an authentic window into its programs. These videos feature faculty insights and student perspectives, showcasing the rigorous curriculum, collaborative online environment, and even on-campus immersion sessions. By letting prospective students virtually “step inside” the learning experience, Berkeley illustrates transparency in course design and highlights faculty visibility and student interaction in a compelling, real way.


    Source: UC Berkeley School of Information

    Oregon State University Ecampus: OSU Ecampus prioritizes learning experience transparency through its online course demos. The Ecampus “Preview an online course” feature allows prospective students to explore actual course modules and interactive elements before enrolling. From instructor introduction videos to virtual labs and quizzes, these previews give an authentic taste of the online classroom. This strategy demystifies online learning and demonstrates the innovative technology and teaching methods OSU uses to keep students engaged.


    Source: Oregon State University Ecampus

    Athabasca University: Athabasca University’s website prominently features student success stories to build authenticity and trust. These first-hand accounts from graduates of its fully online programs highlight personal achievements and career outcomes. For example, one alumna credits landing a new tech job to the skills gained through her Athabasca degree. By sharing such testimonials (often in the students’ own words), Athabasca underscores the real successes of its learners and the supportive, flexible environment that helped them thrive.


    Source: Athabasca University

    University of Illinois – Gies College of Business: Gies showcases online MBA alumni outcomes as proof of its program’s value. In its news and updates, the college reports impressive career results for graduates of the iMBA program. Surveys of alumni indicate an average 23% salary increase post-degree, and over half of online MBA students earn a promotion or new job offer during their studies. By publicizing these ROI metrics and alumni success stories, Gies effectively communicates the credibility and real-world career impact of its online programs.


    Source: University of Illinois – Gies College of Business

    Penn State World Campus: Penn State’s World Campus highlights numerous online graduate success stories to demonstrate authenticity and outcomes. Its “Success Stories” section shares profiles of adult learners who balanced work, life, and education to earn their degrees online. Graduates speak to how the flexible Penn State online format enabled their career advancement and personal growth (one student noted it allowed her to be “a parent, a great student, and a professional” all at once). These real narratives exemplify the supportive learning community and tangible benefits that World Campus provides.


    Source: Penn State World Campus

    Imperial College Business School: Imperial engages students as content creators through its online student blog. Current students across various programs (including online and part-time degrees) write blog posts about their experiences, challenges, and insights. This first-person content – for instance, students discussing their MBA journey or sharing an “on-campus week” from an online program perspective – adds a highly authentic voice to Imperial’s marketing. By spotlighting student-written stories, the Business School enhances transparency and relatability, letting prospective students hear directly from their peers.


    Source: Imperial College Business School

    University of Edinburgh: The University of Edinburgh provides rich online student experience content for its distance learners. Its official online learning pages and student-driven blogs offer guidance on how online programs work, tools for success, and genuine accounts from students. Prospective learners can find tips from current online students on topics like time management and balancing studies, as well as blog posts detailing personal experiences of adjusting to online learning. By openly sharing these resources and stories, Edinburgh ensures transparency about the online learning journey and fosters authenticity through the voices of its student community.


    Source: University of Edinburgh

    Trust Is Earned, Not Claimed

    Fully online programs succeed when trust is earned, not assumed. The institutions seeing the most sustainable enrollment growth are not the ones making the boldest claims, but the ones providing the clearest answers.

    Authentic, proof-driven content does more than attract attention. It qualifies leads, supports faster decision-making, and builds long-term credibility. Prospective students are no longer persuaded by marketing gloss. They want to see how a program works, who it serves, and what it delivers.

    Authenticity is not about volume or tone. It’s about substance. It means showing real student journeys, revealing how support systems function, and making the learning experience transparent from day one.

    In a digital education market defined by choice and skepticism, trust is your most valuable differentiator. And trust isn’t something you say you have—it’s something you prove, consistently, through the content you publish.

    Do you need tailored and actionable online course marketing ideas to help reenergize your student recruitment efforts?


    FAQs

    How do you make online programs feel “real” to prospective students?
    Show how learning actually happens: include LMS walkthroughs, assignment previews, and real student stories. Avoid abstract claims. Clarity and visibility around the experience make it tangible.

    What types of content build trust fastest for fully online courses?
    Proof-focused content: student outcomes, faculty presence, support interactions, and clear course expectations. Specifics convert better than general claims.

    What should an online program testimonial include to feel credible?
    A credible online program testimonial should include the student’s background, challenges faced, how they were supported, and the outcome achieved. It should reference their program and timeline, and ideally end with advice for future students. Specificity and context are key—avoid anonymous or overly polished quotes.
