Category: Teaching & Learning

  • Building and rebuilding trust in higher education


Trust is fundamental to all of our relationships, and it is vital if those relationships are to be meaningful.

It can be an anchor in uncertain times, as explored in this special edition of the International Journal of Academic Development. Within higher education, trust underpins our diverse institutional relationships: with students and their families, friends and supporters; with colleagues, regulatory bodies, employers, trade unions, students’ unions, prospective students and schools, and international partners; and with local communities and many other groups. These individual interactions combine to build a complex matrix of relationships in which trust originates, takes form or develops.

Or sometimes, it doesn’t. Uncertainty and complexity can stifle relationships, suppressing trust as partners hold back or withdraw, leading to a crisis in confidence. A lack of trust can derail any relationship, well-intentioned institutional narrative, or strategy.

Having trust often means believing that you matter in some way to a person, or to the people working in an organisation or system, enough for them to care about your experiences and feelings. It’s possible to trust without being highly engaged, but it’s difficult to get engaged without having trust.

Trust matters in higher education because universities are there to support individuals to achieve their goals, whether these are in teaching or research. Those individuals need to feel that people and systems are designed to include and support them. Trust has to be earned and it can easily be lost. Reflecting on the many challenges facing the UK higher education sector, and the multifaceted priorities and constraints it is working within, it will be impossible to meet the expectations and aspirations of our students, colleagues and partners unless there is trust at every level.

    When we encounter media articles like this one from the Guardian, we are asked to consider the possibility that trust in the whole system of higher education is beginning to fail – perhaps a consequence of massification and a loss of faith in education for its own sake, rather than as a passport to a shrinking pool of traditional jobs. We need to talk about why higher education remains worthwhile, and how we can work together to maintain trust in it and to ensure that students feel their own value as part of its systems.

    Nurturing relationships

When we build trust we are also building partnerships. When we recognise an institution as trustworthy, we are frequently noting that it delivers on what it has promised and that it values relationships with its stakeholders; it holds itself accountable. And it is not just about the large-scale, sector-wide challenges; it is also about considering how we build trust through the everyday experiences of our diverse student and colleague communities.

Creating trustful spaces in the classroom is one element of this. Research into teachers’ perceptions of trust-building has shown that trust is based on teachers’ care and concern for students as much as on their subject knowledge and teaching ability. Research on how students in engineering perceive trust-building efforts also shows that they value attention to them as individuals most highly. They also use their trust in the institution to mitigate perceived problems with individual colleagues or services, believing that the university, or their department, makes student-centred decisions with respect to recruiting and training lecturers and professional services staff, and accepting that occasionally they may not find an individual teacher trustworthy.

    Trust and accountability also underpin meaningful cultural change in uncomfortable spaces and sensitive areas. When we trust each other we can have difficult conversations and begin to accept the existence of hidden barriers across our diverse colleague and student groups. Inside the university, teams must trust each other, empathising with each other’s views and values – 2024’s report from AdvanceHE and Wonkhe showed that trust is paramount when leading strategic change in challenging times. Because of this, trust underpins institutional sustainability; particularly within a sector that is currently responding to rising costs and income constraints.

Nurturing relationships through difficult choices about resources and provision requires a fine balance, transparency, and accountability if trust is to be maintained and difficult decisions explained. Few people would continue a relationship in which trust has broken down, or with someone or something that they would describe as untrustworthy, but many of us will recognise the situation where this has happened and all parties feel powerless to rebuild the trust.

    What can individuals and leaders do?

    Trust can be expressed in many forms: You can trust me, I trust you, you can trust yourself, you can trust each other. Within a complex array of opportunities and challenges which call for attention, HE institutions will benefit from finding the most appropriate strategies, performance indicators and (regulatory) endorsements which will create trust and accountability in their provision to build their reputation. As leaders, how do we show colleagues that we trust them? How do we encourage others to show that they trust us? What do we do to ensure that we are trustworthy?

At a larger scale, a trustworthy research partner shares ideas, makes it easy to distribute funding between institutions, and invites contributions from stakeholders, colleagues working in the field, and students. A trustworthy community partner supports students and employees from the local area, ensuring that they feel welcome and valued, and uses local services. A trustworthy internationalised university supports cultural diversity and makes it easier both to relocate and to collaborate on research and teaching by explaining practical and organisational differences. By considering how long-term relationships are built and maintained, we can develop a track record of ‘quality’ provision and demonstrate that we are ‘worth it’ to students, colleagues, funders, regulatory bodies, employers and other partners.

    When trust in leaders or institutions is lost, the response is often rapid and drastic, with changes in staff and policies having the potential to create further turbulence. As the research with students showed, trust in institutions and systems can survive individual lapses. Maybe a first step should always be to try to rebuild relationships, making oneself, the university, or the system slightly vulnerable in the short term as we work to show that higher education is a human activity which may sometimes not work out as planned, but which we believe in enough to repair.

    We can work at all kinds of levels to build and foster trust in our activities. Public engagement has the power to counter hostile narratives and build trust and so does effective partnership work with our local communities, students and Students’ Unions. Working together, listening to and valuing our partners’ perspectives enables us to identify and mitigate the impacts of challenges and take a constructive and nuanced approach to build both trust and inclusive learning communities. If we are to tackle our current pressing sector challenges and wicked problems such as awarding gaps when trust in public institutions is low, it has never been more important to collaborate with our partners, be visibly accountable and focus on equity.

    So how can we work together to offer a holistic view of the benefits and value that focusing on trust building can bring? We are keen to build a community of practice to systematically strengthen trust across the HE sector. Join us to develop a trust framework which will explore environments that increase or decrease trust across stakeholder groups and consider how to encourage key trust behaviours such as sharing, listening, and being accountable in a range of professional contexts.

If you are interested, get in touch and let us know what trust in higher education means to you: Claire Hamshire and Rachel Forsyth. Claire and Rachel will be speaking on this theme at the Festival of Higher Education on 11-12 November – find out more and book your ticket here.

    Source link

  • We cannot address the AI challenge by acting as though assessment is a standalone activity


    How to design reliable, valid and fair assessment in an AI-infused world is one of those challenges that feels intractable.

    The scale and extent of the task, it seems, outstrips the available resource to deal with it. In these circumstances it is always worth stepping back to re-frame, perhaps reconceptualise, what the problem is, exactly. Is our framing too narrow? Have we succeeded (yet) in perceiving the most salient aspects of it?

    As an educational development professional, seeking to support institutional policy and learning and teaching practices, I’ve been part of numerous discussions within and beyond my institution. At first, we framed the problem as a threat to the integrity of universities’ power to reliably and fairly award degrees and to certify levels of competence. How do we safeguard this authority and credibly certify learning when the evidence we collect of the learning having taken place can be mimicked so easily? And the act is so undetectable to boot?

    Seen this way the challenge is insurmountable.

    But this framing positions students as devoid of ethical intent, love of learning for its own sake, or capacity for disciplined “digital professionalism”. It also absolves us of the responsibility of providing an education which results in these outcomes. What if we frame the problem instead as a challenge of AI to higher education practices as a whole and not just to assessment? We know the use of AI in HE ranges widely, but we are only just beginning to comprehend the extent to which it redraws the basis of our educative relationship with students.

    Rooted in subject knowledge

    I’m finding that some very old ideas about what constitutes teaching expertise and how students learn are illuminating: the very questions that expert teachers have always asked themselves are in fact newly pertinent as we (re)design education in an AI world. This challenge of AI is not as novel as it first appeared.

    Fundamentally, we are responsible for curriculum design which builds students’ ethical, intellectual and creative development over the course of a whole programme in ways that are relevant to society and future employment. Academic subject content knowledge is at the core of this endeavour and it is this which is the most unnerving part of the challenge presented by AI. I have lost count of the number of times colleagues have said, “I am an expert in [insert relevant subject area], I did not train for this” – where “this” is AI.

    The most resource-intensive need that we have is for an expansion of subject content knowledge: every academic who teaches now needs a subject content knowledge which encompasses a consideration of the interplay between their field of expertise and AI, and specifically the use of AI in learning and professional practice in their field.

It is only on the basis of this enhanced subject content knowledge that we can then go on to ask: what preconceptions are my students bringing to this subject matter? What prior experience and views do they have about AI use? What precisely will be my educational purpose? How will students engage with this through a newly adjusted repertoire of curriculum and teaching strategies? The task of HE remains a matter of comprehending a new reality and then designing for the comprehension of others. Perhaps the difference now is that the journey of comprehension is even more collaborative and even less finite than it once would have seemed.

    Beyond futile gestures

    All this is not to say that the specific challenge of ensuring that assessment is valid disappears. A universal need for all learners is to develop a capacity for qualitative judgement and to learn to seek, interpret and critically respond to feedback about their own work. AI may well assist in some of these processes, but developing students’ agency, competence and ethical use of it is arguably a prerequisite. In response to this conundrum, some colleagues suggest a return to the in-person examination – even as a baseline to establish in a valid way levels of students’ understanding.

    Let’s leave aside for a moment the argument about the extent to which in-person exams were ever a valid way of assessing much of what we claimed. Rather than focusing on how we can verify students’ learning, let’s emphasise more strongly the need for students themselves to be in touch with the extent and depth of their own understanding, independently of AI.

    What if we reimagined the in-person high stakes summative examination as a low-stakes diagnostic event in which students test and re-test their understanding, capacity to articulate new concepts or design novel solutions? What if such events became periodic collaborative learning reviews? And yes, also a baseline, which assists us all – including students, who after all also have a vested interest – in ensuring that our assessments are valid.

    Treating the challenge of AI as though assessment stands alone from the rest of higher education is too narrow a frame – one that consigns us to a kind of futile authoritarianism which renders assessment practices performative and irrelevant to our and our students’ reality.

There is much work to do in expanding subject content knowledge and in reimagining our curricula and reconfiguring assessment design at programme level such that it redraws our educative relationship with students. Assessment more than ever has to become a common endeavour rather than something we “provide” to students. A focus on how we conceptualise the trajectory of students’ intellectual, ethical and creative development is inescapable if we are serious about tackling this challenge in a meaningful way.

    Source link

  • Academic coaching is data-driven support for students in the dark


    Universities offer a wide range of support to students – lecturers’ office hours, personal tutors, study skills advisors, peer-mentoring officers, mental health and wellbeing specialists, and more.

    But even with these services in place, some students still feel they are falling through the cracks.

    Why? One of the most common pieces of student feedback might offer a clue – “I wish I had known you and come to you earlier”.

    Within the existing system, most forms of support rely on students to take the first step – to reach out, refer themselves, or report a problem.

    But not all students can or will: some are unsure who to turn to, others worry about being judged, and many feel too overwhelmed to even begin. These are the students who often disappear from view – not because support does not exist, but because they cannot access it in time.

    Meanwhile, academics are stretched thin by competing research and teaching demands, and support teams – brilliant though they are – can only respond once a student enters this enquiry-response support system.

    Systematic support that requires courage

    As a result, students struggling silently often go unnoticed: for those “students in the dark”, there is often no obvious red flag for support services to act on until it is too late.

    NSS data in recent years reveal a clear pattern of student dissatisfaction with support around feedback and independent study, indicating a growing concern and demand for help outside the classroom.

    While the existing framework works well for those confident and proactive students, without more inclusive and personalised mechanisms in place, we risk missing the very group who would benefit most from early, student-centred support.

    This is where academic coaching comes in. One of its most distinctive features is that it uses data not as an outcome, but as a starting point. At Buckinghamshire New University, Academic Coaches work with an ecosystem of live data – attendance patterns, assessment outcomes, and engagement time with the VLE – collaborating closely with data intelligence and student experience teams to turn these signals into timely action.

    While our academic coaching model is still in its early phase, we have developed simulated student personae based on common disengagement patterns and feedback from colleagues. These hypothetical profiles help us shape our early intervention strategies and continuously polish our academic coaching model.

    For example, “Joseph”, a first-year undergraduate (level 4) commuter student, stops logging into the VLE midway through the term. Their engagement drops from above cohort average to zero and stays that way for a week. In the current system, this might pass unnoticed.

    But through live data monitoring, we can spot this shift and reach out – not to reprimand but to check in with empathy. Having been through the student years, many of us know, and even still remember, what it is like to feel overwhelmed, isolated, or simply lost in a new environment. The academic coaching model allows us to offer a gentle point of re-entry with either academic or pastoral support.
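As a purely illustrative sketch – not Buckinghamshire New University’s actual tooling, and with column names, schema and the “one quiet week” threshold all assumptions made for the example – a flagging rule of the kind described above might look something like this:

```python
import pandas as pd

def flag_quiet_students(engagement: pd.DataFrame, quiet_weeks: int = 1) -> list:
    """Return IDs of students whose weekly VLE engagement has dropped to zero
    after previously sitting at or above the cohort average.

    Assumes columns: student_id, week, minutes_on_vle (a hypothetical schema,
    for illustration only).
    """
    # Compare each student's weekly minutes against that week's cohort average
    cohort_avg = engagement.groupby("week")["minutes_on_vle"].transform("mean")
    engagement = engagement.assign(above_avg=engagement["minutes_on_vle"] >= cohort_avg)

    flagged = []
    for student_id, rows in engagement.sort_values("week").groupby("student_id"):
        recent = rows.iloc[-quiet_weeks:]   # most recent week(s)
        earlier = rows.iloc[:-quiet_weeks]  # everything before that
        # The "Joseph" pattern: previously at or above the cohort average,
        # now completely silent for the most recent week(s).
        if (not earlier.empty
                and earlier["above_avg"].any()
                and (recent["minutes_on_vle"] == 0).all()):
            flagged.append(student_id)
    return flagged
```

A flag like this is only ever a prompt to start a conversation, not a verdict on the student.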

One thing to clarify: data alone does not diagnose the problem, but it does help identify when something has changed. It flags patterns that suggest a student might be struggling silently, giving us the opportunity to intervene before there is a formal cause for concern. From there, we Academic Coaches reach out with an attentive touch: not with a warning, but with an invitation.

    This is what makes the model both scalable and targeted. Instead of waiting for students to self-refer or relying on word of mouth, we can direct time and support where it is likely to matter most – early, quietly, and personally.

    Most importantly, academic coaching does not reduce students to data points. It uses data to ask the right questions and to guide an appropriate response. Why has this student disengaged? Perhaps something in their life has changed.

    Our role is to notice this change and offer timely and empathetic support, or simply a listening ear, before the struggle becomes overwhelming. It is a model that recognises the earlier we notice and act, the greater the impact will be. Sometimes, the most effective student support begins not with a request, but with a well-timed email in the student’s inbox.

    Firefighting? Future-proofing

    The academic coaching model is not just about individual students – it is about rethinking how this sector approaches student support at a time of mounting pressure. As UK higher education institutions face financial constraints, rising demand, and increasing complexity in students’ needs, academic coaching offers a student-centred and cost-effective intervention.

    It does not replace personal tutors or other academic or wellbeing services – instead, it complements them by stepping in earlier and guiding students toward appropriate support before a crisis hits.

    This model also helps relieve pressure on overstretched academic staff by providing a clearly defined, short-term role focused on proactive engagement – shifting the approach from reactive firefighting to preventative care.

    Fundamentally, academic coaching addresses a structural gap: some students start their university life already at a disadvantage – unsure how to fit into this new learning environment or make use of available support services to become independent learners – and the current system often makes it harder for them to catch up.

    While the existing framework tends to favour confident and well-connected students, academic coaching helps rebalance the system by creating a more equitable pathway into support – one that is data-driven yet recognises and respects each student’s uniqueness. In a sector that urgently needs to do more with less, academic coaching is not just a compassionate gesture, but a future-facing venture.

    That said, academic coaching is not a silver bullet and it will not solve every problem or reach every student. From our discussions with colleagues and institutional counterparts, one of the biggest challenges identified – after using data to flag students – is actually getting them on board with the conversation.

    Like all interventions, academic coaching needs proper investment, training, interdepartmental cooperation, clear role boundaries, and a scalable framework for evaluating impact.

    But it is a timely, student-centred response to a gap that traditional structures often miss – a role designed to notice what is not being said, to act on early warning signs, and to offer students a safe place to re-engage.

    As resources tighten and expectations grow, university leadership must invest in smarter, more sensible forms of support. Academic coaching offers not just an added layer – it is a reimagining of how we gently guide students back on track before they drift too far from it.

    Source link

  • Inclusivity should be about more than individual needs


    Assessment lies at the core of higher education. It helps focus students’ learning and helps them evidence, to themselves and to others, the progress they have made in their learning and growth.

    Setting, supporting and marking assessed student work takes up a substantial proportion of academic colleagues’ effort and time.

    Approaches to assessment and outcomes of assessment experiences underpin the narratives crafted by many higher education providers to showcase how they secure meaningful educational gains for their students.

    It’s not just what you know

    Educational gains go well beyond academic assessment, yet assessment is central to student experiences and should not be limited to academic knowledge gains. Indeed, a nuanced and insightful independent report commissioned by the Office for Students in March 2024 on how educational gains were defined and articulated in TEF 2023 submissions notes that providers rated gold for student outcomes

    “make reference to enhancing student assessment practices as a vehicle for embedding identified educational gains into the curriculum, explaining that their range of assessments is designed to assess beyond subject knowledge.”

Assessments that require evidence of learning beyond subject knowledge are particularly worth pondering, because these assessments are more likely to underpin the kind of inclusive higher education experiences that providers hope to create for their students, with inclusion understood in broad rather than narrow terms.

    The link between inclusion and assessment has been problematised by scholars of higher education. A narrow view of inclusive assessment focuses on individual adjustments in response to specific student needs. Higher education providers, however, would benefit from developing a broad definition of inclusive assessment if they are intent on meaningfully defining educational gains. Such a definition will need to move beyond implementing individual adjustments on a case by case basis, to consider intersecting and diverse student backgrounds that may impact how a student engages with their learning.

    Well-defined

A good definition should also be mindful of (but not constrained by) needs and priorities articulated by external bodies and employers. It should be based on a thorough understanding of how to create equitable student assessment experiences in interdisciplinary settings (being able to operate flexibly across disciplines is key to solving societal challenges). It should appreciate that bringing co- and extra-curricular experiences into summative assessment does not dilute a course or programme’s academic core.

    It should be aligned to a view of assessment for and as learning. It should value impact that goes beyond individual student achievement and is experienced more broadly in the assessment context. Importantly, it should embrace the potential of generative artificial intelligence to enhance student learning while preserving the integrity of assessment decisions and the need for students to make responsible use of generative tools during and beyond their studies.

All higher education providers are likely to be able to find at least some examples of good, broadly defined inclusive practice in their contexts – these may just need to be spotlighted for others to consider and engage with. To help with this task, providers should be exploring:

• Who is included in conversations about what is assessed, when and how?
• How fully are experiences outside a more narrowly defined academic curriculum core included in summative evaluative judgements about student achievement of intended and desired outcomes?
• To what extent does the range of assessments within a course or programme include opportunities for students to have their most significant strengths developed and recognised?

    Providers should develop their own narratives and frameworks of educational gains to create full inclusion in and through assessment. As they carefully implement these (implementation is key), they may also consider not just the gains that can be evidenced but also whether they could attract, welcome and evidence gains for a broader range of students than might have been included in the providers’ initial plans.

And if energy to rethink assessment reaches a low point, it will be useful to remember that insufficient attention to inclusion, broadly defined, when assessing learning and measuring gains can (inadvertently) create further disadvantage for individuals, as it preserves the system that created the disadvantage in the first place.

    Source link

  • We need to talk about culture


For a few years now, when touring around SUs to deliver training over the summer, my colleague Mack (and in previous years, Livia) and I have been encountering interesting tales of treatment that feel different but are hard to explain.

    We tend to kick the day off with a look at the educational journey of student leaders – the highs and lows, the setbacks and triumphs, all in an attempt to identify the aspects that might have been caused (or at least influenced) by institutional or wider higher education policy.

    And while our daft and dated student finance system, the British obsessions with going straight in and completing at top speed, or local policies on assessment or personal tutoring or extenuating circumstances all get a look in, more often than not it’s something else that has caused a problem.

    It’s the way a member of staff might have responded to a question; the reaction to a student who’s loaded up with part-time work or caring responsibilities; the way in which extracurriculars are considered in a meeting on study progress; the background discussions in a misconduct panel (which, for some reason, the sector still routinely forces student leaders on to); or the way in which departmental or local discretion in policy implementation might have been handled by a given school or department.

Sometimes the differences are apparent to a student that’s well-connected, or one that’s experienced a joint award, or one that’s ended up winning their election having completed their PGT at another university (often in another country) – in a way that they aren’t to those without that point of comparison. Often, the differences are invisible.

    It was especially obvious in the years that followed those “no detriment” policies that popped up during Covid. Not all ND policies were the same, but just for a moment we seemed to have moved into an era where the pace at which someone completed and the number of attempts they’d had at doing so seemed less important than whether they’d reached the required standard.

    The variable speed and enthusiasm accompanying the introduction of “no detriment” policies was telling in and of itself – but more telling was the snapping back and abolition of many of the measures designed to cope with student difference and setbacks just as soon as the masking mandates were over.

    Sometimes the differences are about the nuts and bolts of policies that can be changed and amended through the usual round of committee work. Sometimes they’re about differences in volumes of international students, or wild differences in the SSR that central policies pretend aren’t there. But often, especially the ones that are apparent not to them but to us, they’re differences that seem to say something about the way things are done there.

    They are, in other words, about culture.

Aqui não se aprende, sobrevive-se (“Here you don’t learn, you survive”)

    I’d been trying to put my finger on a way to describe a particular thread in the explanations for years – was it a misplaced notion of excellence? Something about the Russell Group, or STEM? Something about those subjects that are externally accredited, or those that fall into the “higher technical” bracket? Or was it about working with the realities of WP?

But earlier this year, I think I got close. We’d accidentally booked a cheap hotel in Lisbon for one of our study tours that just happened to be opposite Técnico – the “higher technical” faculty of the University of Lisbon (“Instituto Superior Técnico”) that has been turfing out Portugal’s most respected engineers (in the broadest sense of the term) since 1911.

    And buried in one of those strategy documents that we tend to harvest on the trips was a phrase that said it all – what students had described back in 2019 as a “meritocracia da dificuldade”, or in English, a “meritocracy of difficulty”.

Courses at Técnico were known to be hard – even one of our Uber drivers knew that – but that had, in and of itself, become the institution’s defining currency. Students, staff, and alumni alike described an environment where gruelling workloads, high failure rates and dense, encyclopaedic syllabi were worn as badges of honour.

    Passing through that kind of system was not just about acquiring knowledge – but about proving your ability to endure and survive, with employers reinforcing the story by recruiting almost unquestioningly on the basis of survival.

Se os alunos não aguentam, não deviam estar aqui (“If students can’t take it, they shouldn’t be here”)

    Academic staff featured prominently in sustaining that culture. Having themselves been shaped by the same regime, many prided themselves on reproducing it for the next generation.

    Any move to reduce content, rebalance workloads, or broaden learning was interpreted as an unacceptable form of “facilitation”, “spoon feeding”, “dumbing down” or pandering. What counted, in their eyes, was difficulty itself – with rigour equated less with the quality of learning than with the sheer weight of what had to be endured.

    The insistence on difficulty carried consequences for students. Its emphasis on exams, for example, meant that learning became synonymous with “studying to pass”, rather than a process of deep engagement.

    The focus often fell on maximising tactics to get through, rather than on cultivating lasting understanding. In turn, students grew risk-averse – seeking out past papers, recycling lab work, and avoiding uncertainty, rather than developing the capacity to tackle open-ended problems.

O Técnico orgulha-se das reprovações (“Técnico takes pride in its failure rates”)

    Non-technical subjects were also undervalued and looked down upon in that climate. Humanities and social sciences were frequently dismissed by staff and students alike as “soft” or “fluffy”, in contrast with the “seriousness” of technical content. That hierarchy of value both narrowed students’ horizons and reinforced the sense that only subjects perceived as hard could be respected.

    It left little room for reflection on social, ethical, or cultural dimensions of high level technical education – and contributed in turn to a broader lack of extracurricular and associative engagement that caused problems later in the workplace.

    And underlying all of that was the sheer pressure placed on students. The combination of high workload, repeated failure, and a culture that equated merit with suffering created an environment where wellbeing was routinely sacrificed to performance.

    Scattered timetables, heavy workloads, and complex commuting patterns left little space for students to build social connections or help each other to cope. And those demanding schedules and long travel times also discouraged students from building a connection with the institution beyond the academics assessing them.

    Staff, proud of having survived themselves, were routinely unsympathetic to students who struggled, and the system’s inefficiency – with many repeating units year after year – was both demoralising and costly. For some, the relentless pressure became part of their identity – for others, it was simply crushing.

As humanidades são vistas como perda de tempo. Só conta o que dói (“The humanities are seen as a waste of time. Only what hurts counts”)

I recognise much of what’s in the report from IST’s Committee on Review of Education and Pedagogical Practices (CAMEPP) in the discussions we’ve had with student leaders. We may not have the non-continuation or time-to-complete issues (although a dive into OfS’ dashboards suggests that some departments very much do) – but the “culture” issues in there very much sound familiar.

    One officer told me about an academic who, when they explained they’d had to pick up more shifts in their part-time job to cover rent, sniffed and said that university “wasn’t meant for people who had to work in Tesco.”

    The implication wasn’t subtle – success was contingent on being able to study full-time, with no distractions, no commitments, and no compromises. The message was that working-class students were in the wrong place.

    Another described a personal tutor meeting where extracurricular involvement was treated as a sign of distraction – a dangerous indulgence. A student who had been pouring energy into running their society was solemnly advised to “park the hobbies” until after graduation, as though the skills, friendships, and confidence gained outside the classroom were worth nothing compared to a clean transcript.

    The sense of suspicion towards student life beyond the lecture theatre was as striking as it was disheartening for a commuter student who’d only found friends in this way.

    We’ve heard countless variations of staff dismissing pleas for help with mental health, reframing them as either “just stress” or, worse, a valuable rite of passage. One student leader said they’d been told by a tutor that “a bit of pressure builds character,” as if panic attacks were proof of academic seriousness. In that culture, resilience was demanded, but never supported.

    We’ve also heard about students being told that missing a rehearsal for a hospital appointment would “set the wrong precedent,” or that seeking an extension on a piece of groupwork after a bereavement was “unfair on others.”

    Others describe the quiet pressure to keep going after failing a module – not with support to improve, partly because the alternative offered was repeating the year, all with the subtle suggestion that “some people just aren’t cut out for this.” Much suggests a yearning for the students of the past – rather than a view on what the actual students need in the future.

Quando pedimos ajuda, dizem-nos que todos já passaram por isto (“When we ask for help, we’re told that everyone has been through this”)

    There are tales of students told that asking questions in lectures shows they “haven’t done the reading,” or that group work is best approached competitively rather than collaboratively – each message subtly reinforcing a culture of endurance, suspicion, and survival rather than one of learning and growth.

    Then there are the stories about labs where “presenteeism” rules supreme – students dragging themselves in while feverish because attendance is policed so tightly that missing a practical feels like academic self-sabotage.

    Or the sense, especially in modules assessed exclusively (or mainly) through a single high-stakes exam, that students are competing in a kind of intellectual Hunger Games – one chance, one shot, no mistakes – a structure that turns learning into a gamble, and turns peers into rivals.

    Some of it is structural – student finance systems in the UK are especially unforgiving of setbacks, reductions in intensity and differences in learning pace. Some of it is about UK perceptions of excellence – the ingrained idea that second attempts can only be granted if a student fails, and even then capped, or the idea that every assessment beyond Year 1 needs to be graded rather than passed or failed, or it can’t be “excellent”.

    But much of it was just about attitudes.

Facilitar seria trair a tradição do Técnico (“Making things easier would betray Técnico’s tradition”)

    Again and again, what has struck me hasn’t been the formal policy frameworks, but the tone of the replies students received – the raised eyebrow when someone asked about getting an extension, the sigh when a caring responsibility was mentioned, the laugh when a student suggested their part-time job was making study harder, the failure to signpost when others would.

    It was the quick dismissal of a concern as “excuses,” the insistence that “everyone’s under pressure,” or the sharp rebuke that “the real world doesn’t give second chances.” To those delivering them, they may have just been off-hand comments from those themselves under pressure – but to students, they were signals, sometimes subtle, sometimes stark, about who belonged, who was valued, and what counted as legitimate struggle.

And worse, for those student leaders going into a second year, it was often a culture that was hidden. Large multi-faculty universities in the UK tend to involve differing cultures and variable levels of enthusiasm towards compliance with central policies or improvement initiatives.

    Almost every second-year student leader I’ve ever met can pick out one part of the university that doesn’t play ball – where the policies have changed, but the attitudes haven’t.

    And they seem to know someone who was a champion for change, only to leave when confronted with the loudest voices in a department or committee that seem determined to participate only to resist it.

Menos carga lectiva, mas isso é infantilizar o ensino (“Less teaching load – but that would be infantilising education”)

Back at Técnico, the CAMEPP commission’s diagnosis was fascinating. It argued that while Técnico’s “meritocracy of difficulty” had historically served as a guarantee of quality and employability, it had become an anachronism.

    Curricula were monolithic and encyclopaedic, often privileging sheer quantity of content over relevance or applicability. The model encouraged competition over collaboration, generated high failure rates, and wasted talent by grinding down those without the stamina — or privilege — to withstand its demands.

    The report argued that the culture not only demoralised students – but also limited Técnico’s global standing. In an era of rapid change, interdisciplinarity, and international mobility, the school’s rigidity risked undermining its attractiveness to prospective students and its capacity for innovation.

    Employers still valued Técnico graduates, but the analysis warned that the institution was trading on its past reputation, rather than equipping students for uncertain futures.

    For students, the practical impact was devastating. With teaching dominated by lectures and assessment dominated by exams, learning was often reduced to tactical preparation for high-stakes hurdles. A culture that equated merit with suffering left little space for curiosity, creativity, or critical reflection.

    Non-technical subjects were trivialised, narrowing graduates’ horizons and weakening their ability to engage with the ethical, political, and social contexts in which engineers inevitably operate.

    For staff, the culture had become self-perpetuating. Academics were proud of having endured the same system, and resistant to change that looked like dilution. Attempts to rebalance workloads or integrate humanities were dismissed as spoon-feeding, and student pleas for support were reframed as evidence of weakness. What looked like rigour was, in practice, an institutionalised suspicion of anything that might reduce pressure.

Temos de formar pessoas, não apenas engenheiros (“We have to educate people, not just engineers”)

    Against that backdrop, the Técnico 2122 programme was deliberately framed as more than a curriculum reform. The commission argued that without tackling the underlying values and assumptions of the institution, no amount of modular tinkering would deliver meaningful change.

    It set out a vision in which Técnico would be judged not only by the toughness of its courses but by the quality of its culture, the richness of its environment, and the breadth of its graduates’ capacities. The emphasis was on moving from a survival ethos to a developmental one — a school where students were expected to grow, not simply endure.

One strand of the proposals was the deliberate insertion of humanities, arts and social sciences into the heart of the curriculum. It introduced nine credits of HASS in the first cycle, including courses in ethics, public policy, international relations and the history of science – all to disrupt the hierarchy that had long placed technical content above all else.

    It was presented not as a softening of standards but as an enrichment, equipping future engineers with the critical, ethical and societal awareness to operate in a world where technical solutions always have human consequences. The language of “societal thinking” was used to capture that broader ambition — an insistence that engineering could no longer be conceived apart from the contexts in which it is deployed.

Preparado para colaborar, não apenas competir (“Prepared to collaborate, not just compete”)

    Another aspect was a rebalancing of assessment. Instead of relying almost exclusively on high-stakes examinations, the proposals argued for a model in which exams and continuous assessment carried roughly equal weight. The aim was to break the cycle of cramming and repetition, and to create incentives for sustained engagement across the semester.

By rewarding consistent work and collaborative projects, the reforms intended to shift students away from tactical “study to pass” behaviour towards deeper and more creative forms of learning. A parallel ambition was to build more interdisciplinarity – using integrated projects and cross-departmental collaboration to replace competitive isolation with teamwork across different branches of engineering.

    Just as important was the recognition that culture is shaped beyond the classroom. The plan envisaged new residences and more spaces for social, cultural and recreational activity, developed in partnership with the wider university. These weren’t afterthoughts – but central to the project, a way of countering the lack of associative life that the workload and commuting patterns had made so difficult.

    And alongside new facilities came the proposal to give formal curricular recognition to extracurricular involvement — a statement that student societies, voluntary projects and civic engagement mattered as part of the Técnico experience.

    The review committed to embedding both extracurricular credit and communal spaces into the fabric of the institution, all with an aim of generating a more balanced, human environment – one in which students could belong as well as perform.

And in conjunction with the SU, every programme has an academic society that students can access and get involved in – combining belonging, careers, study skills and welcome activity in a way that gives every student a community they can serve in, as well as a representative body (rather than just a representative) at faculty and university level to develop both constructive agendas for change and bespoke student-led interventions at the right level.

    At every stage, the commission stressed that this was a cultural and emotional transformation as much as it was a structural one – requiring staff and students alike to accept that the old ways no longer served them best.

    Change management was presented as a challenge of mindset as much as of design. It was not enough to alter syllabi or redistribute credits – the ambition was to cultivate an atmosphere where excellence was defined by collaboration, creativity and societal contribution rather than by survival alone.

    I don’t know how successful the reforms have been, or whether they’ve met the ambitions set in the astonishingly long review document. But what I do know is they found inspiration from higher technical universities and faculties from around the world:

    • Delft University of Technology in the Netherlands had been experimenting with “challenge-based” learning, where interdisciplinary teams of students work on open-ended, real-world problems with input from industry and civic partners.
    • ETH Zurich in Switzerland had sought to rebalance its exam-heavy culture by integrating continuous assessment and project work, with explicit emphasis on collaboration and reflection rather than competition alone.
    • Aalto University in Finland had deliberately merged technology, business, and arts to break down disciplinary silos, embedding creativity and design into engineering programmes and fostering a stronger culture of interdisciplinarity.
    • Chalmers University of Technology in Sweden had restructured large parts of its curriculum around project-based learning, placing teamwork and sustained engagement at the centre of assessment instead of single high-stakes hurdles.
• Technical University of Munich (TUM) had introduced entrepreneurship centres, interdisciplinary labs, and credit for extracurricular involvement to underline that learning and innovation often happen outside formal classrooms.
    • And École Polytechnique in Paris had sought to rebalance its notoriously demanding technical curriculum with a stronger grounding in humanities and social sciences, aiming to cultivate graduates able to navigate the societal dimensions of scientific and technological progress.

Criatividade e contributo, não apenas sobrevivência (“Creativity and contribution, not just survival”)

There are real lessons here. I’ve talked before about the way the autonomous branding and decision-making in the faculty at Lisbon surfaces higher technical education in a way that those who harp on about 1992 and the abolition of polytechnics can’t see back in the UK.

But the case study goes further for me. On all of the “student focussed” agendas – mental health, disability, commuters, diversity – there’s invariably a working group and a policy review where one or more bits of a university won’t, don’t and never will play ball.

    A couple of decades of focus on the “student experience” have seen great strides and changes to the way the sector supports students and scaffolds learning. But most of those working in a university know that yet another review won’t change that one bit – especially if its research figures are strong and it’s still recruiting well.

    Part of the problem is the way in which student culture fails to match up to the structures of culture in the modern UK university. 1,500 course reps is a world of difference to associative structures at school, faculty or department level. Both universities and SUs have much to learn from European systems about the way in which the latter cause issues of retention, or progression or even just recruitment to be “owned” by student associations.

    Some of it is about course size. What we think of as a “course” would be one pathway inside much bigger courses with plenty of choice and optionality in Europe. The slow erosion of elective choice in the UK makes initiatives like those seen elsewhere harder, not easier – but who’s brave enough to go for it when every other university seems to have 300 programme leaders rather than 30?

    But it’s the faculty thing that’s most compelling. What Técnico’s review shows is that a faculty can take itself seriously enough to undertake a searching cultural audit – not just compliance with a curriculum refresh, but a root-and-branch reflection on what it means to be educated there, in the context of the broader discipline and the way that discipline is developing around the world.

    It raises an obvious question – why don’t more faculties here do the same? Policy development in the UK almost always happens at the university level, often driven by external regulatory pressure, and usually framed in language so generic that it misses the sharp edges of disciplinary culture.

    But it’s the sharp edges – the tacit assumptions about what counts as “hard” or “serious”, the informal attitudes of staff towards struggling students, the unspoken hierarchies of value between technical and social subjects – that so often define the student experience in practice.

    A review of the sort that Técnico and others undertook forces the assumptions into the open. It makes it harder for a department to dismiss humanities as “fluffy” or to insist that wellbeing struggles are just rites of passage when the evidence has been gathered, collated, and written down.

    It gives students’ unions a reference point when they argue for cultural change, and it creates a shared vocabulary for both staff and students to talk about what the institution is, and what it wants to be. That kind of mirror is uncomfortable – but it’s also powerful.

    And if nothing else, the review reminds us that culture is not accidental. It is constructed, transmitted, and defended – sometimes with pride, sometimes with inertia. The challenge is whether faculties here might be brave enough to interrogate their own meritocracies of difficulty, to ask whether the traditions they prize are really preparing students for the future, or whether they are just reproducing a cycle of survival.

That’s a process that can’t be delegated up to the university centre, nor imposed by a regulator. It has to come from within – which makes me wonder whether those students and staff who find the culture where they work oppressive need to be surfaced and connected – before the usual suspects (that are usually suspect) do the thing they always do, and preserve rather than adapt.

    Source link

  • Is peer review of teaching stuck in the past?


    Most higher education institutions awarded gold for the student experience element of their 2023 Teaching Excellence Framework (TEF) submissions mentioned peer review of teaching (PRT).

    But a closer look at what they said will leave the reader with the strong impression that peer review schemes consume lots of time and effort for no discernible impact on teaching quality and student experience.

    What TEF showed us

    Forty out of sixty providers awarded gold for student experience mention PRT, and almost all of these (37) called it “observation.” This alone should give pause for thought: the first calls to move beyond observation towards a comprehensive process of peer review appeared in 2005 and received fresh impetus during the pandemic (see Mark Campbell’s timely Wonkhe comment from March 2021). But the TEF evidence is clear: the term and the concept not only persist, but appear to flourish.

    It gets worse: only six institutions (that’s barely one in ten of the sector’s strongest submissions) said they measure engagement with PRT or its impact, and four of those six are further education (FE) colleges providing degree-level qualifications. Three submissions (one is FE) showed evidence of using PRT to address ongoing challenges (take a bow, Hartpury and Plymouth Marjon universities), and only five institutions (two are FE) showed any kind of working relationship between PRT and their quality assurance processes.

    Scholarship shows that thoughtfully implemented peer review of teaching can benefit both the reviewer and the reviewed but that it needs regular evaluation and must adapt to changing contexts to stay relevant. Sadly, only eleven TEF submissions reported that their respective PRT schemes have adapted to changing contexts via steps such as incorporating the student voice (London Met), developing new criteria based on emerging best practice in areas such as inclusion (Hartpury again), or a wholesale revision of their scheme (St Mary’s Twickenham).

    The conclusion must be that providers spend a great deal of time and effort (and therefore money) on PRT without being able to explain why they do it, show what value they get from it, or even ponder its ongoing relevance. And when we consider that many universities have PRT schemes but didn’t mention them, the scale of expenditure on this activity will be larger than represented by the TEF, and the situation will be much worse than we think.

    Why does this matter?

    This isn’t just about getting a better return on time and effort; it’s about why providers do peer review of teaching at all, because no-one is actually required to do it. The OfS conditions of registration require higher education institutions to “provide evidence that all staff have opportunities to engage in reflection and evaluation of their learning, teaching, and assessment practice”.

Different activities can meet the OfS stipulation, such as team teaching, formal observations for AdvanceHE Fellowship, teaching network discussions, and microteaching within professional development settings. Though not always formally categorised within institutional documentation, these nevertheless form part of the ecosystem within which people seek or engage with review from peers, and represent forms of peer-review-adjacent practice which many TEF submissions discussed at greater length and with more confidence than PRT itself.

So higher education institutions invest time and effort in PRT but fail to explain their reasoning or evidence the benefits, and appear to derive greater value from alternative activities that satisfy the OfS. Yet PRT persists. Why?

    What brought us to this point?

    Many providers will find that their PRT schemes were started or incorporated into their institutional policies around the millennium. Research from Crutchley and colleagues identified Brenda Smith’s HEFCE-funded project at Nottingham Trent in the late 1990s as a pioneering moment in establishing PRT as part of the UK landscape, following earlier developments in Australia and the US. Research into PRT gathered pace in the early 2000s and reached a (modest) peak in around 2005, and then tailed off.

    PRT is the Bovril of the education cupboard. We’re pretty sure it does some good, though no one is quite sure how, and we don’t have time to look it up. We use it maybe once a year and are comforted by its presence, even though its best before date predates the first smartphones, and its nutritional value is now less than the label that bears its name. The prospect of throwing it out induces an existential angst – “am I a proper cook without it?” – and yes of course we’d like to try new recipes but who has the time to do that?

    Australia shows what is possible

There is much to be learnt from looking outside our own borders at how peer review has evolved in other countries. In Australia, the 2024 Universities Accord offered 47 recommendations as part of a federally funded vision for tertiary education reform for 2050. The Accord was reviewed on Wonkhe in March 2024.

    One of its recommendations advocates for the “increased, systematised use of peer review of teaching” to improve teaching quality, insisting this “should be underpinned by evidence of effective and efficient methodologies which focus on providing actionable feedback to teaching staff.” The Accord even suggested these processes could be used to validate existing national student satisfaction surveys.

    Some higher education institutions, such as The University of Sydney, had already anticipated this direction, having revised their peer review processes with sector developments firmly in mind a few years ahead of the Accord’s formal recommendations. A Teaching@Sydney blog post from March 2023 describes how the process uses a pool of centrally trained and accredited expert reviewers, standardised documentation aligned with contemporary, evidence-based teaching principles, and cross-disciplinary matching processes that minimise conflicts of interest, while intentionally integrating directly with continuing professional development pathways and fellowship programs. This creates a sustainable ecosystem of teaching enhancement rather than isolated activities, meaning the Bovril is always in use rather than mouldering behind Christmas’s leftover jar of cranberry sauce.

    Lessons for the UK

    Comparing Australia and the UK draws out two important points. First, Australia has taken the simple but important step of saying PRT has a role in realising an ambitious vision for HE. This has not happened in the UK. In 2017 an AdvanceHE report said that “the introduction and focus of the Teaching Excellence Framework may see a renewed focus on PRT” but clearly this has not come to pass.

    In fact, the opposite is true, because the majority of TEF Summary Statements were silent on the matter of PRT, and there seemed to be some inconsistency in judgments in those instances where the reviewers did say something. In the absence of any explanation it is hard to understand why they might commend the University of York’s use of peer observation on a PG Cert for new staff, but judge that the University of West London meeting their self-imposed target of 100 per cent completion of teaching observations every two years for all academic permanent staff members was “insufficient evidence of high-quality practice.”

    Australia’s example sounds rather top-down, but it’s sobering to realise that, if the TEF submissions are anything to go by, they are probably achieving more impact for less time and effort than their UK colleagues.

    And Australia is clear-sighted about how PRT needs to be implemented for it to work effectively, and how it can be joined up with measures such as student satisfaction surveys that have emerged since PRT first appeared over thirty years ago. Higher education institutions such as Sydney have been making deliberate choices about how to do PRT and how to integrate it with other management, development and recognition processes – an approach that has informed and been validated by the Universities Accord’s subsequent recommendations.

    Where now for PRT?

    UK providers can follow Sydney’s example by integrating their PRT schemes with existing professional development pathways and criteria, and a few have already taken that step. The FE sector affords many examples of using different peer review methods, such as learning walks and coaching in combination. University College London’s recent light refresh of its PRT scheme shows that management and staff alike welcome choice.

    A greater ambition than introducing variety would be to improve reporting of programme design and to develop validated tools for assessing outcomes. This would require significant work and sponsorship from a body such as AdvanceHE, but it would yield stronger evidence about PRT’s value for supporting teaching development, and underpin meaningful evaluation of practice.

    This piece is based on collaborative work between University College London and the University of Sydney examining peer review of teaching processes across both institutions. It was contributed by Nick Grindle, Samantha Clarke, Jessica Frawley, and Eszter Kalman.

    Source link

  • Peer review is broken, and pedagogical research has a fix

    Peer review is broken, and pedagogical research has a fix

    An email pings into my inbox: peer reviewer comments on your submission #1234. I take a breath and click.

    Three reviewers have left feedback on my beloved paper. The first is gentle and constructive, pointing out areas where the work could be tightened up. The second simply provides a list of typos and notes where the grammar is not technically correct. The third is vicious. I stop reading.

    Later that afternoon, I sit in the annual student assessment board for my department. Over a painstaking two hours, we discuss, interrogate, and wrestle with how we, as educators, can improve our feedback practices when we mark student work. We examine the distribution of students’ marks closely, looking out for outliers, errors, or evidence of an ill-pitched assessment. We reflect upon how we can make our written feedback more useful. We suggest thoughtful and innovative ways to make our practice more consistent and clearer.

    It then strikes me how these conversations happen in parallel – peer review sits in one corner of academia, and educational assessment and feedback sits in another. What would happen, I wonder, if we started approaching peer review as a pedagogical problem?

    Peer review as pedagogy

    Peer review is a high-stakes context. We know that we need proper, expert scrutiny of the methodological, theoretical, and analytical claims of research to ensure the quality, credibility, and advancement of what we do and how we do it. However, we also know that there are problems with the current peer review system. As my experience attests, issues including reviewer biases and conflicts of interest, a lack of transparency in editorial decision-making, and inconsistencies in the length and depth of reviewer feedback all plague our experiences. Peer reviewers can be sharp, hostile, and unconstructive. They can focus on the wrong things, be unhelpful in their vagueness, or miss the point entirely. These problems threaten the foundations of research.

    The good news is that we do not have to reinvent the wheel. For decades, people in educational research, or the scholarship of teaching and learning (SoTL), have been grappling both theoretically and empirically with the issue of giving and receiving feedback. Educational research has considered best practices in feedback presentation and content, learner and marker feedback literacies, management of socioemotional responses to feedback, and transparency of feedback expectations. The educational feedback literature is vast and innovative.

    However – curiously – efforts to improve the integrity of peer review don’t typically frame it as a pedagogical problem that could borrow insights from the educational literature. This is, I think, a woefully missed opportunity. There are at least four clear initiatives from the educational scholarship that could be a useful starting point for tightening up the rigour of peer review.

    What is feedback for?

    We would rarely mark student work without a clear assessment rubric and standardised assessment criteria. In other words, as educators we wouldn’t sit down to assess students’ work without at least first considering what we have asked them to do. What are the goalposts? What are the outcomes? What are we giving feedback for?

    Rubrics and assessment criteria provide transparent guidelines on what is expected of learners, in an effort to demystify the hidden curriculum of assessment and reduce subjectivity in assessment practice. In contrast, peer reviewers are typically provided with scant information about what to assess manuscripts for, which can lead to inconsistencies between journal aims and scope, reviewer comments, and author expectations.

    Imagine if we had structured journal-specific rubrics, based on specific, predefined criteria that aligned tightly with the journal’s mission and requirements. Imagine if these rubrics guided decision-making and clarified the function of feedback, rather than letting reviewers go rogue with their own understanding of what the feedback is for.

    Transparent rubrics and criteria could also bolster the feedback literacy of reviewers and authors. Feedback literacy is an established educational concept, which refers to a student’s capacity to appreciate, make sense of, and act upon their written feedback. Imagine if we approached peer review as an opportunity to develop feedback literacy, and we borrowed from this literature.

    Do we all agree?

    Educational research clearly highlights the importance of moderation and calibration for educators to ensure consistent assessment practices. We would never allow grades to be returned to students without some kind of external scrutiny first.

    Consensus calibration refers to the practice of multiple evaluators working together to ensure consistency in their feedback and to agree upon a shared understanding of relevant standards. There is a clear and robust steer from educational theory that this is a useful exercise to minimise bias and ensure consistency in feedback. This practice is not typically used in peer review.

    Calibration exercises, where reviewers assess the same manuscript and have opportunity to openly discuss their evaluations, might be a valuable and evidence-based addition to the peer review process. This could be achieved in practice by more open peer review processes, where reviewers can see the comments of others and calibrate accordingly, or through a tighter steer from editors when recruiting new reviewers.

    That is not to say, of course, that reviewers should all agree on the quality of a manuscript. But any effort to consolidate, triangulate, and calibrate feedback can only be useful to authors as they attempt to make sense of it.
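    As a small, hypothetical illustration of what a calibration exercise could quantify – the reviewers, scores and rubric scale below are invented for the sake of the example – two reviewers’ ratings of the same manuscripts against a shared rubric can be compared using a standard agreement statistic such as Cohen’s kappa:

    ```python
    # A minimal sketch of a calibration check: two reviewers score the same
    # manuscripts on a shared 1-5 rubric, and we measure how well they agree.
    # The scores below are invented purely for illustration.
    from sklearn.metrics import cohen_kappa_score

    reviewer_a = [4, 3, 5, 2, 4, 3, 1, 4]  # e.g. "overall contribution" ratings
    reviewer_b = [4, 2, 5, 3, 4, 3, 2, 5]

    # Quadratically weighted kappa penalises near-misses (4 vs 5) less than
    # large disagreements (1 vs 5), which suits ordinal rubric scales.
    kappa = cohen_kappa_score(reviewer_a, reviewer_b, weights="quadratic")
    print(f"Weighted kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
    ```

    A low score would not mean either reviewer is wrong, but it would flag that the pair have not yet converged on a shared understanding of the standards and could usefully discuss their evaluations.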

    Is this feedback timely?

    Best practice in educational contexts also supports the adoption of opportunities to provide formative feedback. Formative feedback is feedback that helps learners improve as they are learning, as opposed to summative feedback whereby the merit of a final piece of work is evaluated. In educational contexts, this might look like anything from feedback on drafts through to informal check-in conversations with markers.

    Applying the formative/summative distinction to peer review may help authors improve their work in dialogue with reviewers and editors, rather than receiving purely summative judgments about whether the manuscript is fit for publication. In practice, this can be achieved through the formative feedback offered by registered reports, whereby authors receive peer review and editorial direction before data is collected or accessed, at a time when they can actually make use of it.

    Formative feedback through the adoption of registered reports can provide opportunity for specific and timely suggestions for improving the methodology or research design. By fostering a more developmental and formative approach to peer review, the process can become a tool for advancing knowledge, rather than simply a gatekeeping mechanism.

    Is this feedback useful?

    Finally, the educational concept of feedforward, which focuses on providing guidance for future actions rather than only critiquing past performance, needs to be applied to peer review too. By applying feedforward principles, reviewers can shift their feedback to be more forward-looking, offering tangible, discrete, and actionable suggestions that help the author improve their work in subsequent revisions.

    In peer review, approaching comments with a feedforward framing may transform feedback into a constructive dialogue that motivates people to make their work better by taking actionable steps, rather than a hostile exchange built upon unclear standards and (often) mismatched expectations.

    So the answers to improving some parts of the peer review process are there. We can, if we’re clever, really improve the fairness, consistency, and developmental value of reviewer comments. Structured assessment criteria, calibration, formative feedback mechanisms, and feedforward approaches are just a few strategies that can enhance the integrity of peer review. The answers are intuitive – but they are not yet standard practice in peer review because we typically don’t approach peer review as pedagogy.

    There are some problems that this won’t fix. Peer review relies on the unpaid labour of time-poor academics in an increasingly precarious academia, which adds challenge to efforts to improve the integrity of the process.

    However, there are steps we can take – and we now need to think about how these can be achieved in practice. By clarifying the purpose of peer review, tightening up the rigour and quality of feedback, and applying educational interventions to improve the process, we can take an important step towards fixing peer review for the future of research.

    Source link

  • How can students’ module feedback help prepare for success in NSS?

    How can students’ module feedback help prepare for success in NSS?

    Since the dawn of student feedback there’s been a debate about the link between module feedback and the National Student Survey (NSS).

    Some institutions have historically doubled down on the idea that there is a read-across from the module learning experience to the student experience as captured by NSS and treated one as a kind of “dress rehearsal” for the other by asking the NSS questions in module feedback surveys.

    This approach arguably has some merits in that it sears the NSS questions into students’ minds to the point that when they show up in the actual NSS it doesn’t make their brains explode. It also has the benefit of simplicity – there’s no institutional debate about what module feedback should include or who should have control of it. If there isn’t a deep bench of skills in survey design in an institution there could be a case for adopting NSS questions on the grounds they have been carefully developed and exhaustively tested with students. Some NSS questions have sufficient relevance in the module context to do the job, even if there isn’t much nuance there – a generic question about teaching quality or assessment might resonate at both levels, but it can’t tell you much about specific pedagogic innovations or challenges in a particular module.

    However, there are good reasons not to take this “dress rehearsal” approach. NSS endeavours to capture the breadth of the student experience at a very high level, not the specific module experience. It’s debatable whether module feedback should even be trying to measure “experience” – there are other possible approaches, such as focusing on learning gains or skills development, especially if the goal is to generate actionable feedback data about specific module elements. For both students and academics, seeing the same set of questions repeated ad nauseam is really rather boring, and is as likely to create disengagement and alienation from the “experience” construct NSS proposes as a comforting sense of familiarity and predictability.

    But separating out the two feedback mechanisms entirely doesn’t make total sense either. Though the totemic status of NSS has been tempered in recent years it remains strategically important as an annual temperature check, as a nationally comparable dataset, as an indicator of quality for the Teaching Excellence Framework and, unfortunately, as a driver of league table position. Securing consistently good NSS scores, alongside student continuation and employability, will feature in most institutions’ key performance indicators and, while vice chancellors and boards will frequently exercise their critical judgement about what the data is actually telling them, when it comes to the crunch no head of institution or board wants to see their institution slip.

    Module feedback, therefore, offers an important “lead indicator” that can help institutions maximise the likelihood that students have the kind of experience that will prompt them to give positive NSS feedback – indeed, the ability to continually respond and adapt in light of feedback can often be a condition of simply sustaining existing performance. But if simply replicating the NSS questions at module level is not the answer, how can these links best be drawn? Wonkhe and evasys recently convened an exploratory Chatham House discussion with senior managers and leaders from across the sector to gather a range of perspectives on this complex issue. While success in NSS remains part of the picture for assigning value and meaning to module feedback in particular institutional contexts there is a lot else going on as well.

    A question of purpose

    Module feedback can serve multiple purposes, and it’s an open question whether some of those purposes are considered to be legitimate for different institutions. To give some examples, module feedback can:

    • Offer institutional leaders an institution-wide “snapshot” of comparable data that can indicate where there is a need for external intervention to tackle emerging problems in a course, module or department
    • Test and evaluate the impact of education enhancement initiatives at module, subject or even institution level, or capture progress with implementing systems, policies or strategies
    • Give professional service teams feedback on patterns of student engagement with and opinions on specific provision such as estates, IT, careers or library services
    • Give insight to module leaders about specific pedagogic and curriculum choices and how these were received by students to inform future module design
    • Give students the opportunity to reflect on their own learning journey and engagement
    • Generate evidence of teaching quality that academic staff can use to support promotion or inform fellowship applications
    • Depending on the timing, capture student sentiment and engagement and indicate where students may need additional support or whether something needs to be changed mid-module

    Needless to say, all of these purposes can be legitimate and worthwhile, but not all of them can comfortably coexist. Leaders may prioritise comparability of data – asking the same questions across all modules to generate comparable data and identify priorities. Similarly, those operating across an institution may be keen to map patterns and capture differences across subjects – one example offered at the round table was whether students had met with their personal tutor. Such questions may be experienced at department or module level as intrusive and irrelevant compared with more immediately purposeful questions about students’ learning experience on the module. Module leaders may want to design their own student evaluation questions, tailored to inform their pedagogic practice and future iterations of the module.

    There are also a lot of pragmatic and cultural considerations to navigate. Everyone is mindful that students get asked to feed back on their experiences A LOT – sometimes even before they have had much of a chance to actually have an experience. As students’ lives become more complicated, institutions are increasingly wary of the potential for cognitive overload that comes with being constantly asked for feedback. Additionally, institutions need to make their processes of gathering and acting on feedback visible to students so that students can see that sharing their views has an impact – and will confirm this when asked in the NSS. Some institutions are even building into their student surveys questions that test whether students can see the feedback loop being closed.

    Similarly, there is also a strong appreciation of the need to adopt survey approaches that support and enable staff to take action and adapt their practice in response to feedback, affecting the design of the questions, the timing of the survey, how quickly staff can see the results and the degree to which data is presented in a way that is accessible and digestible. For some, trusting staff to evaluate their modules in the way they see fit is a key tenet of recognising their professionalism and competence – but there is a trade-off in terms of visibility of data institution-wide or even at department or subject level.

    Frameworks and ecosystems

    There are some examples in the sector of mature approaches to linking module evaluation data to NSS – it is possible to take a data-led approach that tests the correlation between particular module evaluation question responses and corresponding NSS question outcomes within particular thematic areas or categories, and builds a data model that proposes informed hypotheses about areas of priority for development or approaches that are most likely to drive NSS improvement. This approach does require strong data analysis capability, which not every institution has access to, but it certainly warrants further exploration where the skills are there. The use of a survey platform like evasys allows for the creation of large module evaluation datasets that could be mapped on to NSS results through business intelligence tools to look for trends and correlations that could indicate areas for further investigation.
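    As a hedged sketch of what that data-led approach might look like in practice – the file names, question columns and NSS measure below are hypothetical rather than any institution’s actual schema – module evaluation responses could be averaged by subject area and correlated with the corresponding NSS outcome:

    ```python
    # A minimal sketch: average hypothetical module evaluation questions per
    # subject area, then see which ones track an NSS outcome most closely.
    import pandas as pd

    module_fb = pd.read_csv("module_feedback.csv")   # one row per student response
    nss = pd.read_csv("nss_by_subject.csv")          # one row per subject area

    # Mean score (Likert 1-5) for each module evaluation question, by subject area.
    module_means = (
        module_fb
        .groupby("subject_area")[["q_feedback_timely", "q_criteria_clear", "q_workload_manageable"]]
        .mean()
    )

    # Attach the subject-level NSS "assessment and feedback" agreement rate.
    merged = module_means.join(nss.set_index("subject_area")["nss_assessment_feedback_pct"])

    # Rank module questions by how strongly they track the NSS outcome --
    # a prompt for hypotheses to investigate, not proof of causation.
    correlations = merged.corr()["nss_assessment_feedback_pct"].drop("nss_assessment_feedback_pct")
    print(correlations.sort_values(ascending=False))
    ```

    The output is only a starting point for informed hypotheses; turning a correlation into a development priority still requires the kind of institutional judgement described above.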

    Others take the view that maximising NSS performance is something of a red herring as a goal in and of itself – if the wider student feedback system is working well, then the result should be solid NSS performance, assuming that NSS is basically measuring the right things at a high level. Some go even further and express concern that over-focus on NSS as an indicator of quality can be to the detriment of designing more authentic student voice ecosystems.

    But while thinking in terms of the whole system is clearly going to be more effective than a fragmented approach, given the various considerations and trade-offs discussed it is genuinely challenging for institutions to design such effective ecosystems. There is no “right way” to do it but there is an appetite to move module feedback beyond the simple assessment of what students like or don’t like, or the checking of straightforward hygiene factors, to become a meaningful tool for quality enhancement and pedagogic innovation. There is a sense that rather than drawing direct links between module feedback and NSS outcomes, institutions would value a framework-style approach that is able to accommodate the multiple actors and forms of value that are realised through student voice and feedback systems.

    In the coming academic year Wonkhe and evasys are planning to work with institutional partners on co-developing a framework or toolkit to integrate module feedback systems into wider student success and academic quality strategies – contact us to express interest in being involved.

    This article is published in association with evasys.

    Source link

  • Machine learning technology is transforming how institutions make sense of student feedback

    Machine learning technology is transforming how institutions make sense of student feedback

    Institutions spend a lot of time surveying students for their feedback on their learning experience, but once you have crunched the numbers the hard bit is working out the “why.”

    The qualitative information institutions collect is a goldmine of insight about the sentiments and specific experiences that are driving the headline feedback numbers. When students are especially positive, it helps to know why, to spread that good practice and apply it in different learning contexts. When students score some aspect of their experience negatively, it’s critical to know the exact nature of the perceived gap, omission or injustice so that it can be fixed.

    Any conscientious module leader will run their eye down the student comments in a module feedback survey – but once you start looking across modules to programme or cohort level, or to large-scale surveys like NSS, PRES or PTES, the scale of the qualitative data becomes overwhelming for the naked eye. Even the most diligent reader will find that bias sets in, as comments that are interesting or unexpected tend to be foregrounded as having greater explanatory power than those that seem run-of-the-mill.

    Traditional coding methods for qualitative data require someone – or ideally more than one person – to manually break down comments into clauses or statements that can be coded for theme and sentiment. It’s robust, but incredibly laborious. For student survey work, where the goal might be to respond to feedback and make improvements at pace, institutions admit that this kind of robust analysis is rarely, if ever, standard practice. Especially as resources become more constrained, devoting hours to this kind of detailed methodological work is rarely a priority.

    Let me blow your mind

    That is where machine learning technology can genuinely change the game. Student Voice AI was founded by Stuart Grey, an academic at the University of Strathclyde (now working at the University of Glasgow), initially to help analyse student comments for large engineering courses. Working with Advance HE, he was able to train the machine learning model on national PTES and PRES datasets. Now, having further trained the algorithm on NSS data, Student Voice AI offers same-day analysis of student comments in NSS results for subscribing institutions.

    Put the words “AI” and “student feedback” in the same sentence and some people’s hackles will immediately rise. So Stuart spends quite a lot of time explaining how the analysis works. The term he uses to describe the version of machine learning Student Voice AI deploys is “supervised learning” – humans manually label categories in datasets and “teach” the machine about sentiment and topic. The larger the available dataset, the more examples the machine is exposed to and the more sophisticated it becomes. Through this process Student Voice AI has landed on a discrete number of comment themes and categories for taught students, and the same for postgraduate research students, into which the majority of student comments consistently fall – trained on, and distinctive to, UK higher education student data. Stuart adds that the categories can and do evolve:

    “The categories are based on what students are saying, not what we think they might be talking about – or what we’d like them to be talking about. There could be more categories if we wanted them, but it’s about what’s digestible for a normal person.”
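    To make the idea of supervised learning concrete, here is a generic, minimal sketch – not Student Voice AI’s actual model – of how human-labelled comments train a theme classifier; the comments, labels and the choice of TF-IDF plus logistic regression are all assumptions made for illustration.

    ```python
    # A generic illustration of supervised learning on student comments:
    # humans label a sample of comments with a theme, the classifier learns
    # from those labels, and new comments are then sorted into the same themes.
    # All comments and labels below are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labelled_comments = [
        ("Feedback on my essay arrived six weeks late", "assessment_and_feedback"),
        ("The lecturer explained difficult ideas clearly", "teaching_quality"),
        ("I could never find a free PC in the library", "learning_resources"),
        ("Marking criteria were transparent and fair", "assessment_and_feedback"),
        ("Seminars were engaging and well structured", "teaching_quality"),
        ("The reading list links were all broken", "learning_resources"),
    ]
    texts, themes = zip(*labelled_comments)

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),   # turn comments into word/phrase features
        LogisticRegression(max_iter=1000),     # learn which features signal which theme
    )
    model.fit(texts, themes)

    # New, unlabelled comments are assigned to the human-defined themes.
    print(model.predict(["Exam feedback told me nothing about how to improve"]))
    ```

    The key point the sketch illustrates is that the categories come from human labelling of real comments, and the model only becomes more reliable as the labelled dataset grows.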

    In practice that means institutions can see a quantitative representation of their student comments, sorted by category and sentiment. You can look at student views of feedback, for example, see the balance of positive, neutral and negative sentiment overall, segment it by department, subject area or year of study, then click through to the relevant comments to see what’s driving that feedback. That’s significantly different from, say, dumping your student comments into a third-party generative AI platform (sharing confidential data with a third party while you’re at it) and asking it to summarise. There’s value in the time and effort saved, but also in the removal of individual personal bias, and in the potential for aggregation and segmentation for different stakeholders in the system. It also becomes possible to compare qualitative student feedback across institutions.
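    As a rough illustration of that kind of drill-down – the column names and rows below are invented rather than any platform’s actual schema – comments that have already been tagged with a category and a sentiment can be pivoted by department and then filtered back to the underlying text:

    ```python
    # A minimal sketch of aggregating tagged comments by department, category
    # and sentiment, then drilling through to the comments behind one cell.
    # The data and column names are hypothetical.
    import pandas as pd

    comments = pd.DataFrame({
        "department": ["Physics", "Physics", "History", "History", "History"],
        "category":   ["feedback", "resources", "feedback", "feedback", "teaching"],
        "sentiment":  ["negative", "positive", "positive", "negative", "positive"],
        "text":       ["...", "...", "...", "...", "..."],
    })

    # Counts of positive/negative comments per department and category.
    summary = pd.crosstab(
        [comments["department"], comments["category"]],
        comments["sentiment"],
    )
    print(summary)

    # Drill through to the comments driving one cell of the summary.
    drivers = comments.query(
        "department == 'History' and category == 'feedback' and sentiment == 'negative'"
    )
    print(drivers["text"].tolist())
    ```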

    Now, Student Voice AI is partnering with student insight platform evasys to bring machine learning technology to qualitative data collected via the evasys platform. And evasys and Student Voice AI have been commissioned by Advance HE to code and analyse open comments from the 2025 PRES and PTES surveys – creating opportunities to drill down into a national dataset that can be segmented by subject discipline and theme as well as by institution.

    Bruce Johnson, managing director at evasys, is enthused about the potential for the technology to drive culture change in how student feedback is used to inform insight and action across institutions:

    “When you’re thinking about how to create actionable insight from survey data the key question is, to whom? Is it to a module leader? Is it to a programme director of a collection of modules? Is it to a head of department or a pro vice chancellor or the planning or quality teams? All of these are completely different stakeholders who need different ways of looking at the data. And it’s also about how the data is presented – most of my customers want, not only quality of insight, but the ability to harvest that in a visually engaging way.”

    “Coming from higher education it seems obvious to me that different stakeholders have very different uses for student feedback data,” says Stuart Grey. “Those teaching at the coalface are interested in student engagement; at the strategic level the interest is in trends and sentiment analysis; and there are also various stakeholder groups in professional services who never normally get to see this stuff, but we can generate the reports that show them what students are saying about their area. Frequently the data tells them something they knew anyway, but it gives them the ammunition to be able to make change.”

    The results are in

    Duncan Berryman, student surveys officer at Queen’s University Belfast, sums up the value of AI analysis for his small team: “It makes our life a lot easier, and the schools get the data and trends quicker.” Previously schools had been supplied with Excel spreadsheets – and his team were spending a lot of time explaining and working through with colleagues how to make sense of the data on those spreadsheets. Being able to see a straightforward visualisation of student sentiment on the various themes means that, as Duncan observes rather wryly, “if change isn’t happening it’s not just because people don’t know what student surveys are saying.”

    Parama Chaudhury, professor of economics and pro vice provost education (student academic experience) at University College London, explains where qualitative data analysis sits in the wider ecosystem for quality enhancement of teaching and learning. In her view, for enhancement purposes, comparing your quantitative student feedback scores with those of another department is not particularly useful – essentially it’s comparing apples with oranges. Yet the apparent ease of comparing quantitative data, set against the sense of overwhelm at the volume and complexity of student comments, can mean that people spend time trying to explain the numerical differences rather than mining the qualitative data for more robust and actionable explanations that can give context to their own scores.

    It’s not that people weren’t working hard on enhancement, in other words, but they didn’t always have the best possible information to guide that work. “When I came into this role quite a lot of people were saying ‘we don’t understand why the qualitative data is telling us this, we’ve done all these things,’” says Parama. “I’ve been in the sector a long time and have received my share of summaries of module evaluations and have always questioned those summaries because it’s just someone’s ‘read.’ Having that really objective view, from a well-trained algorithm makes a difference.”

    UCL has piloted two-page summaries of student comments with specific departments this academic year, and plans to roll out a version for every department this summer. The data is not assessed in a vacuum; it forms part of the wider institutional quality assurance and enhancement processes, which include data offering a range of different perspectives on areas for development. Encouragingly, so far the data from students is consistent with what has emerged from internal reviews, giving the departments that have had the opportunity to engage with it greater confidence in their processes and action plans.

    None of this stops anyone from going and looking at specific student comments, sense-checking the algorithm’s analysis and/or triangulating against other data. At the University of Edinburgh, head of academic planning Marianne Brown says that the value of the AI analysis is in the speed of turnaround – the institution carries out a manual review process to make sure that any unexpected comments are picked up. But being able to share the headline insight at pace (in this case via a Power BI interface) means that leaders receive the feedback while the information is still fresh, and there is more lead time to effect change than if time had been lost to manual coding.

    The University of Edinburgh is known for its cutting-edge AI research, and boasts the Edinburgh (access to) Language Models (ELM), a platform that gives staff and students access to generative AI tools without sharing data with third parties, keeping all user data onsite and secured. Marianne is clear that even a closed system like ELM is not appropriate for unfettered student comment analysis. Generative AI platforms offer the illusion of a thematic analysis, but the result is far from robust, because generative AI operates through sophisticated guesswork rather than analysis of the implications of actual data. “Being able to put responses from NSS or our internal student survey into ELM to give summaries was great, until you started to interrogate those summaries. Robust validation of any output is still required,” says Marianne. Similarly, Duncan Berryman observes: “If you asked a gen-AI tool to show you the comments related to the themes it had picked out, it would not refer back to actual comments. Or it would have pulled this supposed common theme from just one comment.”

    The holy grail of student survey practice is creating a virtuous circle: student engagement in feedback creates actionable data, which leads to education enhancement, and students gain confidence that the process is authentic and are further motivated to share their feedback. In that quest, AI, deployed appropriately, can be an institutional ally and resource-multiplier, giving fast and robust access to aggregated student views and opinions. “The end result should be to make teaching and learning better,” says Stuart Grey. “And hopefully what we’re doing is saving time on the manual boring part, and freeing up time to make real change.”

    Source link

  • In defence of university halls of residence

    In defence of university halls of residence

    During my five years living alongside 340 undergraduate students as a hall warden, I have become a firm believer that residential halls are powerful civic learning environments.

    This realisation did not come immediately; if anything, I saw my role as strictly pastoral rather than having any connections to learning and teaching.

    At first glance, the role of a warden has little to do with learning. The term “warden” is an outdated and often confusing title (we are in the process of changing it) for a staff member responsible for responding to high-level mental health and disciplinary matters, and for occasional residential life events.

    I initially approached my role with misplaced enthusiasm, intervening in all manner of student conflicts, leaving little room for their own responsibility. Finding a middle ground between complete non-intervention and excessive control proved a real challenge.

    Over time, I came to understand that effective support meant creating space for disagreement and face-to-face conflict resolution rather than solving problems on students’ behalf.

    Too shy shy

    When I first started, complaints usually reached me because a student had approached my front-of-house colleagues in person to alert them to a problem. Now they arrive electronically, by email.

    It makes sense. It’s easier, quicker and also means students who may not be around during my formal working hours can make me aware of any issues.

    But sometimes the multiple reports I receive overnight detail seemingly minor problems like a roommate not turning off lights or leaving a window open.

    I think the ease with which students can complain, especially virtually, prevents students from developing crucial conflict resolution skills. Part of living amongst other people is learning to address disagreements. It’s not easy and it’s certainly not comfortable, but it does help you grow as a person.

    It forces you to connect with others you may not agree with – whether because of different socio-economic backgrounds or religious views, or simply because they have different ideas of cleanliness from your own.

    I have witnessed meaningful connections form across religious and gender identities, and social classes within student halls. For example, the son of a billionaire bonding with a flatmate who had spent summers as an agricultural labourer in fields in Lincolnshire. Two people who likely would not have crossed paths if they had not chosen to study at the same university.

    I’ve seen interfaith events attended by those with differing faiths or none at all, leading to genuine friendships.

    These interactions lack formal learning objectives or assessment metrics, yet provide education that our lecture halls struggle to deliver. Providing the literal space for students to meet helps them develop social capital they cannot necessarily get in a classroom.

    Learning from home

    As a sector, we could do more to analyse and report on the civic benefits offered by halls of residence, and we are beginning to do this work at LSE.

    Most UK university halls operate under an outdated property management model, functioning more like luxury hotels than educational spaces. Some private accommodation companies have introduced luxury facilities where students from wealthy families isolate themselves in environments featuring swimming pools and designer furnishings. While aesthetically impressive, these spaces lack genuine community or learning opportunities.

    These approaches miss a crucial opportunity. Residence halls are sites for learning graduate skills, just as much as the formal classroom. Future employers want complex problem-solving and collaboration skills, but the added value of being able to resolve conflict well lies beyond career preparation.

    Holding space

    In my view, modern universities have moved away from an integrated educational vision, focusing primarily on specialised knowledge instead. This fragmentation leaves students ill-equipped to interrogate complex questions or to pursue self-discovery.

    Part of this includes being able to navigate conflict constructively and understanding how to create community across differences.

    Residence halls provide spaces where intellectual, ethical, social, and practical dimensions of education can be reintegrated. Abstract concepts from seminars become concrete realities when negotiating shared living. Moral and civic education requires practical engagement with substantive questions about the common good.

    Living amongst peers is a way of acknowledging higher education as a collective endeavour rather than a timetable of classes and lectures.

    Is this overthinking spaces that should prioritise fun and exploration? I don’t think so. Our halls of residence aren’t peripheral to education. Properly reconceived, they could become central to what makes university education distinctive and valuable as higher education confronts an uncertain future.

    Source link