Tag: Quality

  • High quality learning means developing and upskilling educators on the pedagogy of AI

    There’s been endless discussion about what students do with generative AI tools, and what constitutes legitimate use of AI in assessment, but as the technology continues to improve there’s a whole conversation to be had about what educators do with AI tools.

    We’re using the term “educators” to encompass both the academics leading modules and programmes and the professionals who support, enable and contribute to learning and teaching and student support.

    Realising the potential of the technologies that an institution invests in to support student success requires educators to be willing and able to deploy it in ways that are appropriate for their context. It requires them to be active and creative users of that technology, not simply following a process or showing compliance with a policy.

    So it was a little worrying that, while exploring what effective preparation for digital learning futures could look like for our Capability for change report last year, we noticed how concerned digital and education leaders were about the variable digital capabilities of their staff.

    Where technology meets pedagogy

    Inevitably, when it comes to AI, some HE staff are enthusiastic early adopters and innovators; others are more cautious or less confident – and some are highly critical and/or just want it to go away. Some of this is about personal orientation towards particular technologies – there is a lively and important critical debate about how society comes into a relationship with AI technology and the implications for, well, the future of humanity.

    Some of it is about the realities of the pressures that educators are under, and the lack of available time and headspace to engage with developmental activity. As one education leader put it:

    Sometimes staff, they know that they need to change what they’re doing, but they get caught in the academic cycle. So every year it’s back to teaching again, really, really large groups of students; they haven’t had the time to go and think about how to do things differently.

    But there’s also an institutional strategic challenge here about situating AI within the pedagogic environment – recognising that students will not only be using it habitually in their work and learning, but that they will expect to graduate with a level of competence in it in anticipation of using AI in the workplace. There’s an efficiency question about how using AI can reprofile educator working patterns and workflows. Even if the prospect of “freeing up” lots of time might feel a bit remote right now, educators are clearly going to be using AI in interesting ways to make some of their work a bit more efficient, to surface insight from large datasets that might not otherwise be accessible, or as a co-creator to help enhance their thinking and practice.

    In the context of learning and teaching, educators need to be ready to go beyond asking “how do the tools work and what can I do with them?” and be prepared to ask and answer a larger question: “what does it mean for academic quality and pedagogy when I do?”

    As Tom Chatfield has persuasively argued in his recent white paper on AI and the future of pedagogy, AI needs to have a clear educative purpose when it is deployed in learning and teaching, and should be about actively enhancing pedagogy. Reaching this halcyon state requires educators who are not only competent in the technical use of the tools that are available but prepared to work creatively to embed those tools to achieve particular learning objectives within the wider framework and structures of their academic discipline. Expertise of this nature is not cheaply won – it takes time and resource to think, experiment, test, and refine.

    Educators have the power – and responsibility – to work out how best to harness AI in learning and teaching in their disciplines, but education leaders need to create the right environment for innovation to flourish. As one leader put it:

    How do we create an environment where we’re allowing people to feel like they are the arbiters of their own day to day, that they’ve got more time, that they’re able to do the things that they want to do?…So that’s really an excitement for me. I think there’s real opportunity in digital to enable those things.

    Introducing “Educating the AI generation”

    For our new project “Educating the AI generation” we want to explore how institutions are developing educator AI literacy and practice – what frameworks, interventions, and provisions are helpful and effective, and where the barriers and challenges lie. What sort of environment helps educators to develop not just the capability, but also the motivation and opportunity to become skilled and critical users of AI in learning and teaching? And what does that teach us about how the role of educators might change as the higher education learning environment evolves?

    At the discussion session Rachel co-hosted alongside Kortext advisor Janice Kay at the Festival of Higher Education earlier this month, there was a strong sense among attendees that educating the AI generation requires universities to take action on multiple fronts simultaneously if they are to keep up with the pace of change in AI technology.

    Achieving this kind of agility means making space for risk-taking, and moving away from compliance-focused language to a more collaborative and exploratory approach, including with students, who are equally finding their feet with AI. For leaders, that could mean offering both reassurance that this approach is welcomed, and fostering spaces in which it can be deployed.

    In a time of such fast-paced change, staying grounded in concepts of what it means to be a professional educator can help manage the potential sense of threat from AI in learning and teaching. Discussions focused on the “how” of effective use of AI, and the ways it can support student learning and educator practice, are always grounded in core knowledge of pedagogy and education.

    On AI in assessment, it was instructive to hear student participants share a desire to be able to demonstrate learning and skills above and beyond what is captured in traditional assessment, and find different, authentic ways to engage with knowledge. Assessment is always a bit of a flashpoint in pedagogy, especially in constructing students’ understanding of their learning, and there is an open question on how AI technology can support educators in assessment design and execution. More prosaically, the risks to traditional assessment from large language models indicate that staff may need to spend proportionally more of their time on managing assessment going forward.

    Participants drew on the experience of the Covid pivot to emergency remote teaching, and the best lessons from trialling new ways of learning and teaching, as a useful reminder that the sector can pivot quickly – and well – when required. Yet the feeling that AI is often something of a “talking point” rather than an “action point” led some to suggest that there may not yet be a sufficiently pressing sense of urgency to kickstart change in practice.

    What is clear about the present moment is that the sector will make the most progress on these questions when there is sharing of thinking and practice and co-development of approaches. Over the next six months we’ll be building up our insight and we’d love to hear your views on what works to support educator development of AI in pedagogy. We’re not expecting any silver bullets, but if you have an example of practice to share, please get in touch.

    This article is published in association with Kortext. Join Debbie, Rachel and a host of other speakers at Kortext LIVE on Wednesday 11 February in London, where we’ll be discussing some of our findings – find out more and book your place here.

    Source link

  • Measuring What Matters: A Faculty Development System That Improves Teaching Quality – Faculty Focus

    Source link

  • Why busy educators need AI with guardrails

    In the growing conversation around AI in education, speed and efficiency often take center stage, but that focus can tempt busy educators to use what’s fast rather than what’s best. To truly serve teachers–and above all, students–AI must be built with intention and clear constraints that prioritize instructional quality, ensuring efficiency never comes at the expense of what learners need most.

    AI doesn’t inherently understand fairness, instructional nuance, or educational standards. It mirrors its training and guidance, usually as a capable generalist rather than a specialist. Without deliberate design, AI can produce content that’s misaligned or confusing. In education, fairness means an assessment measures only the intended skill and does so comparably for students from different backgrounds, languages, and abilities–without hidden barriers unrelated to what’s being assessed. Effective AI systems in schools need embedded controls to avoid construct‑irrelevant content: elements that distract from what’s actually being measured.

    For example, a math question shouldn’t hinge on dense prose, niche sports knowledge, or culturally specific idioms unless those are part of the goal; visuals shouldn’t rely on low-contrast colors that are hard to see; audio shouldn’t assume a single accent; and timing shouldn’t penalize students if speed isn’t the construct.

    To improve fairness and accuracy in assessments:

    • Avoid construct-irrelevant content: Ensure test questions focus only on the skills and knowledge being assessed.
    • Use AI tools with built-in fairness controls: Generic AI models may not inherently understand fairness; choose tools designed specifically for educational contexts.
    • Train AI on expert-authored content: AI is only as fair and accurate as the data and expertise it’s trained on. Use models built with input from experienced educators and psychometricians.
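
    To make the idea of screening for construct-irrelevant content concrete, here is a minimal sketch of an automated "item linter"; the word-count thresholds and idiom list are illustrative assumptions, not a published psychometric standard.

    ```python
    # Minimal sketch of a construct-irrelevance screen for assessment items.
    # Thresholds and the idiom list are invented for illustration only.
    import re

    IDIOMS = {"ballpark figure", "home run", "par for the course"}  # culture-bound phrases

    def flag_item(question: str, construct: str = "math") -> list[str]:
        """Return warnings about features unrelated to the measured construct."""
        warnings = []
        words = question.split()
        # Long stems add reading load that isn't part of a math construct
        if construct == "math" and len(words) > 60:
            warnings.append(f"stem is {len(words)} words; consider trimming reading load")
        lowered = question.lower()
        for idiom in IDIOMS:
            if idiom in lowered:
                warnings.append(f"culture-bound idiom: '{idiom}'")
        # Very long sentences are a second proxy for reading difficulty
        for sentence in re.split(r"[.!?]", question):
            if len(sentence.split()) > 30:
                warnings.append("sentence over 30 words")
        return warnings
    ```

    A real screen would draw its rules from psychometricians and subject matter experts rather than a hard-coded list, but the shape is the same: explicit, reviewable constraints applied before an item reaches students.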

    These subtleties matter. General-purpose AI tools, left untuned, often miss them.

    The risk of relying on convenience

    Educators face immense time pressures. It’s tempting to use AI to quickly generate assessments or learning materials. But speed can obscure deeper issues. A question might look fine on the surface but fail to meet cognitive complexity standards or align with curriculum goals. These aren’t always easy problems to spot, but they can impact student learning.

    To choose the right AI tools:

    • Select domain-specific AI over general models: Tools tailored for education are more likely to produce pedagogically sound and standards-aligned content that empowers students to succeed. In a 2024 University of Pennsylvania study, students using a customized AI tutor scored 127 percent higher on practice problems than those without.
    • Be cautious with out-of-the-box AI: Without expertise, educators may struggle to critique or validate AI-generated content, risking poor-quality assessments.
    • Understand the limitations of general AI: While capable of generating content, general models may lack depth in educational theory and assessment design.

    General AI tools can get you 60 percent of the way there. But that last 40 percent is the part that ensures quality, fairness, and educational value. This requires expertise to get right. That’s where structured, guided AI becomes essential.

    Building AI that thinks like an educator

    Developing AI for education requires close collaboration with psychometricians and subject matter experts to shape how the system behaves. This helps ensure it produces content that’s not just technically correct, but pedagogically sound.

    To ensure quality in AI-generated content:

    • Involve experts in the development process: Psychometricians and educators should review AI outputs to ensure alignment with learning goals and standards.
    • Use manual review cycles: Unlike benchmark-driven models, educational AI requires human evaluation to validate quality and relevance.
    • Focus on cognitive complexity: Design assessments with varied difficulty levels and ensure they measure intended constructs.

    This process is iterative and manual. It’s grounded in real-world educational standards, not just benchmark scores.

    Personalization needs structure

    AI’s ability to personalize learning is promising. But without structure, personalization can lead students off track. AI might guide learners toward content that’s irrelevant or misaligned with their goals. That’s why personalization must be paired with oversight and intentional design.

    To harness personalization responsibly:

    • Let experts set goals and guardrails: Define standards, scope and sequence, and success criteria; AI adapts within those boundaries.
    • Use AI for diagnostics and drafting, not decisions: Have it flag gaps, suggest resources, and generate practice, while educators curate and approve.
    • Preserve curricular coherence: Keep prerequisites, spacing, and transfer in view so learners don’t drift into content that’s engaging but misaligned.
    • Support educator literacy in AI: Professional development is key to helping teachers use AI effectively and responsibly.
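
    As a rough sketch of what "AI adapts within those boundaries" could look like in practice, the following filters AI suggestions against an expert-defined scope and prerequisite map; the topics and prerequisites shown are hypothetical.

    ```python
    # Illustrative sketch: AI suggests, but expert-set guardrails decide what
    # reaches the learner. The scope and prerequisite map are hypothetical.

    APPROVED_SCOPE = {"fractions", "decimals", "percentages"}    # set by educators
    PREREQS = {"percentages": {"fractions", "decimals"}}         # curricular coherence

    def admit_suggestions(ai_suggestions, mastered):
        """Keep only suggestions that are in scope and whose prerequisites are met."""
        admitted = []
        for topic in ai_suggestions:
            if topic not in APPROVED_SCOPE:
                continue  # outside the expert-defined scope and sequence
            if not PREREQS.get(topic, set()) <= set(mastered):
                continue  # learner would drift past unmet prerequisites
            admitted.append(topic)
        return admitted  # educators still curate and approve this shortlist
    ```

    The design choice is that the model proposes and the framework disposes: adaptation happens only inside boundaries that educators have already reviewed.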

    It’s not enough to adapt–the adaptation must be meaningful and educationally coherent.

    AI can accelerate content creation and internal workflows. But speed alone isn’t a virtue. Without scrutiny, fast outputs can compromise quality.

    To maintain efficiency and innovation:

    • Use AI to streamline internal processes: Beyond student-facing tools, AI can help educators and institutions build resources faster and more efficiently.
    • Maintain high standards despite automation: Even as AI accelerates content creation, human oversight is essential to uphold educational quality.

    Responsible use of AI requires processes that ensure every AI-generated item is part of a system designed to uphold educational integrity.

    An effective approach to AI in education is driven by concern–not fear, but responsibility. Educators are doing their best under challenging conditions, and the goal should be building AI tools that support their work.

    When frameworks and safeguards are built-in, what reaches students is more likely to be accurate, fair, and aligned with learning goals.

    In education, trust is foundational. And trust in AI starts with thoughtful design, expert oversight, and a deep respect for the work educators do every day.

    Source link

  • Neurodiverse leadership is a quality issue for universities, not a side project 

    Author:
    Imran Mir

    This guest blog was kindly authored by Imran Mir, Campus Head and Programme Lead, Apex College Leicester 

    Leadership in higher education is often measured by indicators such as retention rates, research outputs and league table positions. These are important, but leadership is far deeper than numbers. Growing up with autism and then becoming a leader in higher education has shaped how I approach leadership. Being neurodiverse means I see situations differently, notice patterns others may miss, and feel deep empathy with students and colleagues who are often invisible in our systems. 
     
    This is why neurodiverse leadership must be treated as a quality issue. Universities are rightly talking more about inclusive curriculum design and student support, but these conversations rarely extend to who sits at the decision-making table. Representation in leadership is not about tokenism. It is about ensuring the sector benefits from different ways of thinking, which is vital for quality, resilience and innovation.

    Why neurodiverse leadership matters

    According to the University of Edinburgh (2024), one in seven people in the UK are neurodiverse, yet Advance HE’s 2024 report shows that leadership teams in higher education remain overwhelmingly homogenous. This lack of representation is not just an issue of fairness; it is also a missed opportunity for innovation. Research by Deloitte (2017) shows that neurodiverse teams can be up to 30 per cent more productive in tasks requiring creativity and pattern recognition. Universities are currently facing challenges in relation to funding and digital disruption, and they will need this kind of productivity and resilience more than ever. 
     
    Further, Made By Dyslexia (2023) claims that one in five people are dyslexic, many of whom bring excellent problem-solving and communication skills. These strengths align with what is expected in leadership roles, where complex challenges and clear communication are requirements. Yet recruitment and promotion processes can often filter out people who think or communicate differently. 
     
    Austin and Pisano (2017) add that neurodiverse leaders frequently demonstrate empathy and adaptability. These qualities are imperative in higher education as institutions try to meet diverse student needs, respond to rapid change and rebuild trust in their systems. Without neurodiverse leadership, universities risk reinforcing the very barriers they are trying to eradicate. 

    Lessons for higher education leaders

    From my own experience, I have learned three lessons that apply directly to leadership in higher education. 
     
    The first lesson is the power of clarity. Neurodiverse staff and students excel when expectations are clear. As a leader, I have seen first-hand that communicating with clarity in strategy documents, policies and day-to-day interactions builds trust in the academic institution. Research on organisational effectiveness suggests that clear communication consistently improves outcomes across diverse teams.
     
    The second lesson is valuing flexibility. Traditional recruitment, professional development and promotion systems seem to reward conformity. This is a missed opportunity because neurodiverse teams will bring innovation and productivity benefits. Strong leaders can change this by adopting flexible approaches such as task-based interviews, blended assessments that combine written, oral and practical elements, and CPD which takes into consideration various communication styles. 

    The third lesson is role modelling openness. For years I believed that revealing my autism would be seen as a weakness. In reality, sharing my story has made me a stronger leader. It has encouraged colleagues to be open about their own experiences and helped students feel less isolated. Austin and Pisano (2017) show that when leaders model vulnerability and authenticity, it strengthens organisational culture and increases trust across teams. 

    A quality issue, not a side project

    These lessons outline why neurodiverse leadership should not be viewed as a side project. Quality frameworks such as the Office for Students’ conditions and the QAA Quality Code are built on assumptions of fairness, reliability and inclusivity. If leadership itself is not inclusive, then the credibility of these frameworks is undermined. If the voices of the one-in-seven neurodiverse people are not present in leadership, then universities are failing to reflect the diversity of the communities they are trying to serve.  
     
    Neurodiverse leadership strengthens governance, enhances decision-making and ensures policies reflect the diversity of the student body. It is a direct contributor to educational quality, not an optional extra.

    Conclusion

    As someone working in higher education, I know these lessons are transferable across the sector. But they feel especially urgent now, as universities face funding pressures, digital disruption and growing student expectations. In such times, leaders who think differently are not optional. They are essential. 
     
    Neurodiverse leadership is not about meeting quotas. It is about strengthening quality. The sector cannot afford to waste talent or exclude perspectives that could help it adapt and thrive. If universities want to remain resilient, they must recognise that diversity of thought at the leadership table is just as important as diversity in the classroom. At its heart, this is about shaping the future of higher education in a way that is inclusive, innovative and sustainable. 

    Source link

  • Survey: Undergraduates on Academic Quality

    Eight in 10 students rate the quality of education they’re getting as good or excellent, according to the first round of results from Inside Higher Ed’s main annual Student Voice survey of more than 5,000 two- and four-year undergraduates with Generation Lab. That’s up from closer to seven in 10 students in last year’s main Student Voice survey, results that are affirming for higher education at a turbulent economic, technological and political moment.

    Still, students point to room for improvement when it comes to their classroom experience—and flag outside issues that are impacting their academic success. Case in point: 42 percent of all students, and 50 percent of first-generation students, cite financial constraints as a top barrier to their success. This can include tuition but also living and other indirect expenses. Balancing outside work with coursework and mental health issues are other commonly cited challenges. Taken as a whole, the findings underscore the need for comprehensive wraparound supports and a focus on high-touch approaches in an ever more high-tech world.

    About the Survey

    Student Voice is an ongoing survey and reporting series that seeks to elevate the student perspective in institutional student success efforts and in broader conversations about college.

    Look out for future reporting on the main annual survey of our 2025–26 Student cycle, Student Voice: Amplified. Future reports will cover cost of attendance, health and wellness, college involvement, career readiness, and the relationship of all those to students’ sense of success. And check out what students have already said about trust—including its relationship to affordability—and about how artificial intelligence is reshaping the college experience.

    Some 5,065 students from 260 two- and four-year institutions, public and private nonprofit, responded to this main annual survey about student success, conducted in August. Explore the data from the academic life portion of the survey, captured by our survey partner Generation Lab, here. The margin of error is plus or minus 1 percentage point.
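
    As a back-of-envelope check, the margin of error for a sample of this size can be approximated with the standard simple-random-sample formula; published survey margins also reflect weighting and design effects, so they can differ from this naive estimate.

    ```python
    # Worst-case (p = 0.5) margin of error at 95% confidence for a simple
    # random sample. Real survey methodology can yield different figures.
    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """Half-width of an approximate 95% confidence interval for a proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    moe = margin_of_error(5065)  # roughly 0.014, i.e. about 1.4 percentage points
    ```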

    Here’s more on what respondents to our main annual Student Voice survey had to say about academic success.

    1. Students across institution types rate their educational experience highly.

    Some 80 percent of students rate the quality of their college education thus far as good (50 percent) or excellent (30 percent), compared to last year’s 73 percent of students who rated it good (46 percent) or excellent (27 percent). This is relatively consistent across student characteristics and institution types—though, like last year, private nonprofit institutions have a slight edge over public ones, especially in terms of perceived excellence: In 2025, 47 percent of private nonprofit students rate their education excellent versus 27 percent of public institution students. This can’t be explained by two-year institutions being included in the public category, as community college students are slightly more likely than four-year students to describe their education as excellent (32 percent versus 29 percent, respectively). On community college excellence, one recent analysis by the Burning Glass Institute found that two-year institutions have dramatically improved their completion rates in recent years due in part to a concerted student success effort.

    What about four-year college excellence? The Student Voice survey didn’t define quality specifically, but existing data (including prior Student Voice data) shows that students value connections with faculty. And with private nonprofit institutions having lower average faculty-to-student ratios than publics, one possible explanation is that students at private nonprofits may have extra opportunities to connect with their professors. But as other recent analyses demonstrate, private nonprofit institutions, even highly selective ones, do not have a monopoly on delivering life-changing educational experiences for students. Nearly 500 institutions—including community colleges, public universities, religious colleges and specialized colleges—this year achieved a new “opportunity” designation from the Carnegie Classification of Institutions of Higher Education, for example, signifying both high levels of access and strong economic outcomes for students.

    2. Students want fewer high-stakes exams and more relevant course content, indicating this would boost their academic success.

    Like last year’s survey, the top classroom-based action that students say would boost their academic success is faculty members limiting high-stakes exams, such as those counting for 40 percent or more of a course grade: 45 percent of students say this would help. Also like last year, the No. 2 action from a longer list of options is professors better connecting what they teach in class to issues outside of class and/or students’ career interests (40 percent). In Inside Higher Ed’s 2025 Survey of College and University Chief Academic Officers, just 20 percent of provosts said their institution has encouraged faculty members to limit high-stakes exams. But artificial intelligence is forcing a broader campus-assessment reckoning—and how to engage students and authentically assess their learning are questions central to those ongoing conversations. Relatedly, 10 percent of Student Voice respondents say promoting AI literacy would most boost their academic success.

    3. Most students know how and when to use AI for coursework, but there are knowledge gaps between groups.

    Upward of eight in 10 students indicate they know how, when and whether to use generative AI for help with coursework. In 2024’s survey, the plurality of students said this was because their professors had addressed the issue in class. This year, the plurality (41 percent) attributes this knowledge to professors including policies in their syllabi (up from 29 percent last year).

    Like last year, relatively few students credit a college- or universitywide policy or other information or training from the broader institution. Across higher education, many institutions have held off on adopting broad AI use policies, instead deferring to faculty autonomy and expertise: Just 14 percent of provosts in Inside Higher Ed’s survey said their institution has adopted comprehensive AI governance policies and/or an AI strategy—though more said it has adopted specific policies for academic integrity, teaching and/or research (45 percent).

    While classroom-based approaches are clearly evolving, two-year Student Voice respondents report being unclear on how, when and whether to use AI for coursework at double the rate of four-year peers (20 percent versus 10 percent). Perhaps relatedly, community college provosts were most likely to report significant faculty resistance to AI on their campuses, by institution type, at 49 percent versus 38 percent over all. Another difference: 23 percent of adult learners (25 and older) report being unclear, compared to just 10 percent of 18- to 24-year-olds. Both of these gaps merit further research.

    4. Students say their institution’s course delivery methods and scheduling fit their needs—with some caveats.

    Asked to what extent their institution offers course delivery methods/modalities that meet their learning needs and schedules, about four in 10 students each say very well or somewhat well. Adult learners (50 percent), community college students (49 percent) and students working 30 or more hours per week (45 percent) are especially likely to say their college is meeting their needs here very well—evidence that many nontraditional learners are finding the flexibility they need to balance college with busy lives.

    However, students who say they’ve seriously considered stopping out of college at some point are especially unlikely to say their college is serving them very well here (33 percent). Risk factors for stopping out are varied and complex. But this may be one more reason for institutions to prioritize flexible course options. On the other hand, 48 percent of students who have stopped out for a semester or more but then re-enrolled say they’re being very well served by their current institution in this way.

    5. Students’ biggest reported barriers to academic success aren’t academic.

    From a long list of possible challenges, students are most likely to say that financial constraints (such as tuition and living expenses), needing to work while attending college, and mental health issues are impeding their academic success. None of these is explicitly academic, underscoring the need for holistic supports in student success efforts. Adult learners (51 percent), students working 30 hours or more per week (52 percent), first-generation students (50 percent) and students who have previously stopped out of college (55 percent) all report financial constraints at elevated rates. Racial differences emerge, as well: Black (46 percent) and Hispanic (49 percent) students are more likely to flag financial constraints as a barrier to academic success than their white (38 percent) and Asian American and Pacific Islander (37 percent) peers.

    On mental health, women (37 percent) and nonbinary students (64 percent, n=209) flag this as a barrier at higher rates than men (26 percent). Same for students who have seriously considered stopping out of college relative to those who have not: 41 percent versus 30 percent, respectively.

    Some of these issues are interconnected, as well: Other research has found a relationship between basic needs insecurity and mental health challenges that is pronounced among specific student populations, including first-generation and LGBTQIA+ students. Another recent study by the National College Attainment Network found that a majority of two- and four-year colleges cost more than the average student can pay, sometimes by as much as $8,000 a year. And prior Student Voice surveys have found that students link affordability to both their academic performance and to trust in higher education.

    6. Colleges are meeting students’ expectations for responding to changing needs and circumstances—with some exceptions.

    With so many different factors influencing students’ academic success, how are colleges doing when it comes to responding to students’ needs and changing circumstances, such as with deadline extensions, crisis support and work or family accommodations? Seven in 10 students say their college or university is meeting (57 percent) or exceeding (12 percent) their expectations. Most of the remainder say their institution is falling slightly short of expectations. This is relatively consistent across student groups and institution types—though students who have seriously considered stopping out of college are more likely than those who haven’t to say their institution is falling at least slightly short of their expectations (33 percent versus 19 percent, respectively). This again underscores the importance of comprehensive student support systems.

    The Connection Factor

    While it’s clear that AI and other outside variables are reshaping the academic experience, one mitigating influence may be human connection.

    Jack Baretz, a senior studying math and data science at the University of North Dakota, is currently working with peers to develop an AI-powered tool called Kned that can answer students’ and advisers’ basic academic advising questions (think course sequencing, availability and prerequisites). The idea isn’t to replace advisers but rather to counteract high adviser caseloads and turnover and—most importantly—to maximize students’ time with their adviser so it’s a meaningful interaction.

    “There’s a lot of anxiety kids have at this point in their life, where it’s like, ‘I don’t know what I’m going to do next. What would be a good major to make sure I get a job? I don’t want to be jobless.’ Just those conversations—I think that’s where advisers are most effective and probably most content, helping people,” Baretz said.

    From left: University of North Dakota students and advising chatbot collaborators Michael Gross, Owen Reilly and Jack Baretz.


    A prior Student Voice survey found that nearly half of students lack key academic guidance. In this year’s survey, 19 percent of students say channeling more resources to academic advising so they can get more help from their adviser would most boost their academic success. Some 28 percent say the same of new and/or clearer program maps and pathways.

    This ethos extends to what Kned collaborator Michael Gross, a junior majoring in finance, said keeps him academically engaged: connection. His most motivating online classes, for example, have had breakout rooms for peer-to-peer discussions. Why? “When you have more than one person working on something, you’re way more likely to contribute and do your best work on it, because there’s other people’s grades at stake, too,” he said. “It’s not just yours.”

    Gross added, “One thing I would say is for institutions to encourage discussion on college campuses. The main thing that we’re kind of losing, especially with all this technology, is people are becoming so separated from each other. College is meant to be a place where you can engage your social skills and just learn about other people—because this is one of the last times you can be surrounded by so many people your age, and so many people from different walks of life with so many different ideas, too.”

    To this point, 19 percent of Student Voice respondents cite social isolation or lack of belonging as a top barrier to their academic success. Tyton Partners’ 2025 “Time for Class” report also found a jump in both instructor and student preference for face-to-face classes, “showing renewed demand for classroom connection.” In the same report, nearly half of instructors cited academic anxiety as a top concern among students, and students themselves reported low motivation and weak study habits as persistent barriers to learning.

    Terry McGlynn, professor of biology at California State University, Dominguez Hills, and author of The Chicago Guide to College Science Teaching, agreed that “learning is inherently a social endeavor.” And educators have noticed for the past five years that “it’s a lot harder to get students to interact with one another and to show some vulnerability when experiencing intellectual growth.”

    Many have attributed this to the effects of the pandemic, McGlynn said. But if higher education is now “heading into this era of AI in the classroom without reintegrating quality social interactions, I’m worried for us.”

    He added, “I hope we develop approaches that bring people together rather than providing expectations that we work in isolation from one another.”

    This independent editorial project is produced with the Generation Lab and supported by the Gates Foundation.


  • High quality recruitment practice is everyone’s responsibility

    High quality recruitment practice is everyone’s responsibility

    The UK’s international higher education sector is at yet another crossroads.

    The positioning of international students as not only economic contributors to universities, but also cultural and intellectual assets to our campuses and communities is a well-told tale. But with ever-increasing government scrutiny of international recruitment practice, it is essential that the sector can unequivocally demonstrate that it operates with integrity and transparency.

    It is not just the government that institutions must convince of the UK’s commitment to high quality opportunities, but students themselves, to ensure the UK remains a destination of choice.

    Last month, IDP published its global commitment to quality and, as part of this, announced we are fully compliant with the British Council’s Agent Quality Framework (AQF). I imagine some might read that and ask “so what? Weren’t you already working in a compliant way?”

    To be clear, we were (and always have been) committed to being ethical and responsible in our approach to recruitment, and it is what our partners know and trust us for. But our public commitment to the AQF in January 2024, and more recently the changes to basic compliance assessment (BCA) requirements, inspired a wholesale review to ensure all our processes and practices drive quality. Transparency matters now more than ever – the reassurance we can give our partners that we take our role in their student recruitment seriously sends the right signal to the government that we are committed to sustainable growth focused on the right metrics.

    We are in this for the right reasons, that is, the right students, with the right standards and intentions, going to the right universities to complete their studies while living and thriving in our towns and cities. But it’s our hope that by being public about our official compliance, we can encourage others to do the same.

    The fact it has taken us, a well-established world-leading recruitment partner, months to feel confident the checks and balances are in place and that we have full adherence to the framework, demonstrates the complexity behind compliance. As we go along, we’ll no doubt learn more about how we can improve and strengthen those assurances to our partners (and therefore to the government) that international education is not full of ‘bad actors’.

    This is about more than compliance with external standards. It is about the need for the international education community to be loud and proud about our work at a time when quality assurance in recruitment is under a brighter spotlight than ever.

    Regulation, regulation, regulation

    The UK government has made clear that international student recruitment cannot be divorced from broader debates around immigration, compliance and the sustainability of the sector. Parliamentary inquiries. Home Office interventions. The MAC review. The Immigration White Paper. The Home Office English Language Test. Freedom of Information requests. Intensified media focus. All this has raised questions about whether recruitment practice is always consistent with the standards expected of a world-leading education system. And this isn’t just about immigration rhetoric – it is about how those practices affect students, given the enormous financial and emotional investment they make in choosing the UK for higher education, and about making them feel that investment is worthwhile.

    In this environment, questions may be asked as to whether self-regulation is sufficient. The AQF, developed by the British Council in partnership with BUILA, UKCISA and Universities UK International, provides the only recognised, sector-wide framework for professionalism, ethical practice, and student-centred advice. To ignore or sidestep it is to invite greater external regulation and risk undermining already-precarious confidence in the sector.

    International students deserve more than transactional recruitment processes; they deserve ethical, transparent, and student-first guidance that empowers them to make the right choices for their future. Likewise, the UK needs to demonstrate to policymakers that the sector is capable of regulating itself to the highest standard.

    Quality is a shared responsibility

    The AQF sets out clear principles in five areas: organisational behaviour, ethical business practice, objective advice and guidance, student-centred practice and organisational competence.

    Compliance across all these standards is not the endpoint. Instead, it is a baseline for our work. Compliance establishes credibility, but leadership requires continuous improvement and a proactive commitment to go beyond minimum requirements.

    The onus is now on all organisations involved in international student recruitment – universities, agents, sub-agents, aggregators and service providers – to align with the AQF and evidence their compliance. AQF compliance is a collective responsibility. The question is no longer whether institutions and agents should adopt the AQF, but instead how quickly they can demonstrate alignment and ensure that these standards are consistently embedded in practice. Anything less risks weakening trust in the UK’s international education offer.

    The message to the sector is clear – quality must take precedence over volume until we are confident we’re in a position to grow sustainably and deliver on student expectations. Only by embedding AQF standards across all recruitment channels can the UK demonstrate to government, students and the wider international community that it is serious about maintaining excellence.

    The UK has an opportunity to lead globally on quality standards. Let’s do it together.


  • Getting English Language Assessment Right: The key to sustained quality in UK higher education

    Getting English Language Assessment Right: The key to sustained quality in UK higher education

    Author:
    Pamela Baxter

    This HEPI guest blog was kindly authored by Pamela Baxter, Chief Product Officer (English) at Cambridge University Press & Assessment. Cambridge University Press & Assessment are a partner of HEPI.

    UK higher education stands at a crossroads: one of our greatest exports is at risk. Financial pressures are growing. International competition for students is more intense than ever. As mentioned in Cambridge’s written evidence to the Education Select Committee’s Higher Education and Funding: Threat of Insolvency and International Students inquiry, one of the crucial levers for both quality and stability is how we assess the English language proficiency of incoming international students. This will not only shape university finances and outcomes but will have serious implications for the UK’s global reputation for educational excellence.

    The regional and national stakes

    The APPG for International Students’ recent report, The UK’s Global Edge, Regional Impact and the Future of International Students, makes clear that the flow of international students is not only a localised phenomenon. Their presence sustains local economies and drives job creation in regions across the UK. They help deliver on the Government’s wider ambitions for creating opportunities for all by bringing investment and global connectivity to towns and cities. Their impact also stretches to the UK’s position on the world stage, as recruitment and academic exchange reinforce our soft power and bolster innovation.

    International students bring nearly £42 billion to the UK economy each year, the equivalent of every citizen being around £560 better off. International talent is embedded in key sectors of life across the nations, with almost one in five NHS staff coming from outside the UK and more than a third of the fastest-growing UK start-ups founded or co-founded by immigrants. As HEPI’s most recent soft power index showed, 58 serving world leaders received higher education in the UK.

    The value of higher education is rising

    According to the OECD’s Education at a Glance 2025 report – recently launched in the UK in collaboration with HEPI and Cambridge University Press & Assessment – higher education is delivering greater benefits than ever. Nearly half of young adults in OECD countries now complete tertiary education. The returns for individuals and societies in terms of employment, earnings and civic participation are substantial. But when attainment in higher education is so valuable, deficiencies in the preparation of students – including inadequate English language skills – can have considerable costs.

    Why robust testing matters

    Robust English language testing is, therefore, fundamental. It ensures that international students can fully participate in academic life and succeed in their chosen courses. It also protects universities from the costs that arise when students are underprepared.

    The evidence is clear that not all tests provide the same level of assurance. Regulated secure English language tests such as IELTS have demonstrated reliability and validity over decades. By contrast, newer and under-regulated at-home tests have been linked to weaker student outcomes. A recent peer-reviewed study in the ELT Journal found that students admitted on the basis of such tests often struggled with the academic and communicative demands of their courses.

    The HOELT moment

    The proposed introduction of a Home Office English Language Test (HOELT) raises the stakes still further. The Home Office has indicated an interest in at-home invigilation. While innovation of this kind may appear to offer greater convenience, it also risks undermining quality, fairness and security. The HOELT process must be grounded in evidence, setting high minimum standards and ensuring robust protections against misuse. High-stakes decisions such as the creation of HOELT should not be driven by cost or convenience alone. They should be driven, instead, by whether the system enables talented students to succeed in the UK’s competitive academic environment, while safeguarding the country’s immigration processes.

    Conclusion: Sustaining and supporting international student success

    International students enhance the UK’s educational landscape, bolster the UK’s global reputation and contribute to long-term growth and prosperity. But the benefits they bring are not guaranteed. Without trusted systems for English language assessment, we risk undermining the very conditions that allow them to thrive and contribute meaningfully.

    As the Government pursues the creation of its own HOELT, it has a unique opportunity to ensure policy is evidence-led and quality-driven. Doing so will not only safeguard students and UK universities but will also reinforce the UK’s standing as a world leader in higher education.

    Your chance to engage: Join Cambridge University Press & Assessment and HEPI at Labour Party Conference 2025

    These and other issues will be explored in greater detail at Cambridge University Press & Assessment’s forthcoming event in partnership with HEPI at the Labour Party Conference 2025, where policymakers and sector leaders will come together to consider how to secure and strengthen UK higher education on a global stage.


  • Podcast: Quality reforms, duty of candour, skills

    Podcast: Quality reforms, duty of candour, skills

    This week on the podcast we examine the Office for Students’ proposed overhaul of England’s quality system, as radical reforms seek to integrate the Teaching Excellence Framework with minimum standards and give TEF some serious teeth.

    Plus we discuss the government’s long-awaited “Hillsborough law” as the Public Office (Accountability) Bill imposes new duties of candour on universities, and examine the machinery of government changes that have seen apprenticeships policy and Skills England transferred from the Department for Education to Pat McFadden’s expanded Department for Work and Pensions.

    With Andrea Turley, Partner at KPMG, Shane Chowen, Editor at FE Week, Debbie McVitty, Editor at Wonkhe and presented by Jim Dickinson, Associate Editor at Wonkhe.

    TEF6: the incredible machine takes over quality assurance regulation

    Reputation versus sunlight – universities and the new duty of candour

    What Ofsted inspections reveal about university leadership and culture

    A machinery of government muddle over skills

    The former student leaders entering Parliament

    You can subscribe to the podcast on Acast, Amazon Music, Apple Podcasts, Spotify, Deezer, RadioPublic, Podchaser, Castbox, Player FM, Stitcher, TuneIn, Luminary or via your favourite app with the RSS feed.


  • TEF6: the incredible machine takes over quality assurance regulation

    TEF6: the incredible machine takes over quality assurance regulation

    If you loved the Teaching Excellence Framework, were thrilled by the outcomes (B3) thresholds, lost your mind for the Equality of Opportunity Risk Register, and delighted in the sporadic risk-based OfS investigations based on years-old data, you’ll find a lot to love in the latest set of Office for Students proposals on quality assurance.

    In today’s Consultation on the future approach to quality regulation you’ll find a cyclical, cohort-based TEF that also includes a measurement (against benchmarks) of compliance with the thresholds for student outcomes inscribed in the B3 condition. Based on the outcomes of this super-TEF, and prioritised by an assessment of risk, OfS will make interventions (including controls on recruitment and on the conditions of degree awarding powers) and targeted investigations. This is a first-stage consultation only; stage two will come in August 2026.

    It’s not quite a grand unified theory: we don’t mix in the rest of the B conditions (covering less pressing matters like academic standards, the academic experience, student support, assessment) because, in the words of OfS:

    Such an approach would be likely to involve visits to all providers, to assess whether they meet all the relevant B conditions of registration

    The students who are struggling right now with the impacts of higher student/staff ratios and a lack of capacity due to over-recruitment will greatly appreciate this reduction in administrative burden.

    Where we left things

    When we last considered TEF we were expecting an exercise every four years, drawing on provider narrative submissions (which included a chunk on a provider’s own definition and measurement of educational gain), students’ union narrative submissions, and data on outcomes and student satisfaction. Providers were awarded a “medal” for each of student outcomes and student experience – a matrix determined whether this resulted in an overall Bronze, Silver, Gold or Requires Improvement.

    The first three of these awards were deemed to be above minimum standards (with slight differences between each), while the latter was a portal to the much more punitive world of regulation under group B (student experience) conditions of registration. Most of the good bits of this approach came from the genuinely superb Pearce Review of TEF conducted under section 26 of the Higher Education and Research Act, which fixed a lot of the statistical and process nonsense that had crept in under previous iterations and then-current plans (though not every recommendation was implemented).

    TEF awards were last made in 2023, with the next iteration – involving all registered providers plus anyone else who wanted to play along – due in 2027.

    Perma-TEF

    A return to a rolling TEF rather than a quadrennial quality enhancement jamboree means a pool of TEF assessors rather than a one-off panel. There will be steps taken to ensure that an appropriate group of academic and student assessors is selected to assess each cohort – there will be special efforts made to use those with experience of smaller, specialist, and college-based providers – and a tenure of two-to-three years is planned. OfS is also considering whether its staff can be included among the storied ranks of those empowered to facilitate ratings decisions.

    Likewise, we’ll need a more established appeals system. Open only to those with Bronze or Requires Improvement ratings (Gold and Silver are passing grades), it would be a way to potentially forestall engagement and investigations based on an active risk to student experience or outcomes, or on a risk of a future breach of a condition of registration.

    Each provider would be assessed once every three years – all providers taking part in the first cycle would be assessed in either 2027-28, 2028-29, or 2029-30 (which covers only undergraduate students because there’s no postgraduate NSS yet – OfS plan to develop one before 2030). In many cases they’ll only know which one at the start of the academic year in question, which will give them six months to get their submissions sorted.

    Because Bronze is now bad (rather than “good but not great” as it used to be), the first year’s cohort could well include all providers with a 2023 Bronze (or Requires Improvement) rating, plus some with increased risks of non-compliance, some with Bronze in one of the TEF aspects, and some without a rating.

    After this, how often you are assessed depends on your rating – if you are Gold overall it is five years till the next try, Silver means four years, and Bronze three (if you are “Requires Improvement” you probably have other concerns beyond the date of your next assessment) but this can be tweaked if OfS decides there is an increased risk to quality or for any other reason.

    Snakes and ladders

    Ignore the gradations and matrices in the Pearce Review – the plan now is that your lowest TEF aspect rating (remember you got sub-awards last time for student experience and student outcomes) will be your overall rating. So Silver for experience and Bronze for outcomes makes for an overall Bronze. As OfS has decided that you now have to pay (likely around £25,000) to enter what is a compulsory exercise, this is a cost that could lead to a larger cost in future.

    In previous TEFs, the only negative consequence for those outside the top ratings has been reputational – a loss of bragging rights of, arguably, negligible value. The new proposals align Bronze with the (B3) minimum required standards and put Requires Improvement below them: in the new calculus of value the minimum is not good enough, and there will be consequences.

    We’ve already had some hints that a link to fee cap levels is back on the cards, but in the meantime OfS is pondering a cap on student numbers expansion to punish those who turn out Bronze or Requires Improvement. The workings of the expansion cap will be familiar to those who recall the old additional student numbers process – increases of more than five per cent (the old tolerance band, which is still a lot) would not be permitted for poorly rated providers.

    For providers without degree awarding powers it is unlikely they will be successful in applying for them with Bronze or below – but OfS is also thinking about restricting aspects of existing providers’ DAPs, for example limiting their ability to subcontract or franchise provision in future. This is another de facto numbers cap in many cases, and it all comes ahead of a future consultation on DAPs that could make for an even closer link with TEF.

    Proposals for progression

    Proposal 6 will simplify the existing B3 thresholds, and integrate the way they are assessed into the TEF process. In a nutshell, the progression requirement for B3 would disappear – with the assessment made purely on continuation and completion, with providers able to submit contextual and historic information to explain why performance is not above the benchmark or threshold as a part of the TEF process.

    Progression will still be considered at the higher levels of TEF, and here contextual information can play more of a part – with what I propose we start calling the Norland Clause allowing providers to submit details of courses that lead to jobs that ONS does not consider professional or managerial. That existing indicator will be joined by another based on graduates’ reflections (from the Graduate Outcomes survey) on how they are using what they have learned, and by benchmarked salaries three years after graduation from DfE’s Longitudinal Education Outcomes (LEO) data – in deference to that random Kemi Badenoch IFS commission at the tail end of the last parliament.

    Again, there will be contextual benchmarks for these measures (and hopefully some hefty caveating on the use of LEO median salaries) – and, as is the pattern in this consultation, there are detailed proposals to follow.

    Marginal gains, marginal losses

    The “educational gains” experiment, pioneered in the last TEF, is over: making this the third time that a regulator in England has tried and failed to include a measure of learning gain in some form of regulation. OfS is still happy for you to mention your educational gain work in your next narrative submission, but it isn’t compulsory. The reason: reducing burden, and a focus on comparability rather than a diversity of bespoke measures.

    Asking providers what something means in their context, rather than applying a one-size-fits-all measure of student success, was an immensely powerful component of the last exercise. Providers who started on that journey at considerable expense in data gathering and analysis may be less than pleased at this latest development – and we’d certainly understood that DfE were fans of the approach too.

    Similarly, the requirement for students to feed back on student outcomes in their submissions to TEF has been removed. The ostensible reason is that students found it difficult last time round – the result is that insight from the valuable networks between existing students and their recently graduated peers is lost. The outcomes end of TEF is now very much data driven, with only the chance to explain unusual results offered. It’s a retreat from some of the contextual sense that crept in with the Pearce Review.

    Business as usual

    Even though TEF now feels like it is everywhere and for always, there’s still a place for OfS’ regular risk-based monitoring – and annex I (yes, there are that many annexes) contains a useful draft monitoring tool.

    Here it is very good to see staff:student ratios, falling entry requirements, a large growth in foundation year provision, and a rapid growth in numbers among what are noted as indicators of risk to the student experience. It is possible to imagine an excellent system, designed outside the seemingly inviolate framework of the TEF, where events like this would trigger an investigation of provider governance and quality assurance processes.

    Alas, the main use of this monitoring is to decide whether or not to bring a TEF assessment forward, something that punts an immediate risk to students into something that will be dealt with retrospectively. If I’m a student on a first year that has ballooned from 300 to 900 from one cycle to the next there is a lot of good a regulator can do by acting quickly – I am unlikely to care whether a Bronze or Silver award is made in a couple of years’ time.

    International principles

    One of the key recommendations of the Behan review on quality was a drawing together of the various disparate (and, yes, burdensome) streams of quality and standards assurance and enhancement into a unified whole. We obviously don’t quite get there – but there has been progress made towards another key sector bugbear that came up both in Behan and the Lords’ Industry and Regulators Committee review: adherence to international quality assurance standards (to facilitate international partnerships and, increasingly, recruitment).

    OfS will “work towards applying to join the European Quality Assurance Register for Higher Education” at the appropriate time – clearly feeling that the long overdue centring of the student voice in quality assurance (there will be an expanded role for and range of student assessors) and the incorporation of a cyclical element (to desk assessments at least) is enough to get them over the bar.

    It isn’t. Principle 2.1 of the EQAR ESG requires that “external quality assurance should address the effectiveness of the internal quality assurance processes” – philosophically establishing the key role of providers themselves in monitoring and upholding the quality of their own provision, with the external assurance process primarily assessing whether (and how well) this has been done. For whatever reason OfS believes the state (in the form of the regulator) needs to be (and is capable of being!) responsible for all quality assurance, everywhere, all the time. It’s a glaring weakness of the OfS system that urgently needs to be addressed. And it hasn’t been, this time.

    The upshot is that while the new system looks ESG-ish, it is unlikely to be judged to be in full compliance.

    Single word judgements

    The recent use of single headline judgements of educational quality in ways that have far-reaching regulatory implications is hugely problematic. The government announced the abandonment of the old “requires improvement, inadequate, good, and outstanding” judgements for schools in favour of a more nuanced “report card approach” – driven in part by the death by suicide of headteacher Ruth Perry in 2023. The “inadequate” rating given to her Caversham Primary School would have meant forced academisation and deeper regulatory oversight.

    Regulation and quality assurance in education needs to be rigorous and reliable – it also needs to be context-aware and focused on improvement rather than retribution. Giving single headline grades cute, Olympics-inspired names doesn’t really cut it – and as we approach the fifth redesign of an exercise that has only run six times since 2016 you would perhaps think that rather harder questions need to be asked about the value (and cost!) of this undertaking.

    If we want to assess and control the risks of modular provision, transnational education, rapid expansion, and a growing number of innovations in delivery we need providers as active partners in the process. If we want to let universities try new things we need to start from a position that we can trust universities to have a focus on the quality of the student experience that is robust and transparent. We are reaching the limits of the current approach. Bad actors will continue to get away with poor quality provision – students won’t see timely regulatory action to prevent this – and eventually someone is going to get hurt.
