Category: module feedback

  • AI is unlocking insights from PTES to drive enhancement of the PGT experience faster than ever before

    If, like me, you grew up watching Looney Tunes cartoons, you may remember Yosemite Sam’s popular phrase, “There’s gold in them thar hills.”

    In surveys, as in gold mining, the greatest riches are often hidden and difficult to extract. This principle is perhaps especially true when institutions are seeking to enhance the postgraduate taught (PGT) student experience.

    PGT students are far more than an extension of the undergraduate community; they represent a crucial, diverse and financially significant segment of the student body. Yet, despite their growing numbers and increasing strategic importance, PGT students, as Kelly Edmunds and Kate Strudwick have recently pointed out on Wonkhe, remain largely invisible in both published research and core institutional strategy.

    Advance HE’s Postgraduate Taught Experience Survey (PTES) is therefore one of the few critical insights we have about the PGT experience. But while the quantitative results offer a (usually fairly consistent) high-level view, the real intelligence required to drive meaningful enhancement inside higher education institutions is buried deep within the thousands of open-text comments collected. Faced with the sheer volume of data, the choice is between eyeball scanning, with its inevitable introduction of human bias, and laborious, time-consuming manual coding. The challenge for the institutions participating in PTES this year isn’t a lack of data: it’s efficiently and reliably turning that dense, often contradictory, qualitative data into actionable, ethical, and equitable insights.

    AI to the rescue

    The application of machine learning to the analysis of qualitative student survey data presents us with a generational opportunity to amplify the student voice. The critical question is not whether AI should be used, but how to ensure its use meets robust and ethical standards. For that you need the right process – and the right partner – to prioritise analytical substance, comprehensiveness, and sector-specific nuance.

    UK HE training is non-negotiable. AI models must be deeply trained on a vast corpus of UK HE student comments. Without this sector-specific training, analysis will fail to accurately interpret the nuances of student language, sector jargon, and UK-specific feedback patterns.

    Analysis must rely on a categorisation structure that has been developed and refined against multiple years of PTES data. This continuity ensures that the thematic framework reflects the nuances of the PGT experience.

    To drive targeted enhancement, the model must break down feedback into highly granular sub-themes – moving far beyond simplistic buckets – ensuring staff can pinpoint the exact issue, whether it falls under learning resources, assessment feedback, or thesis supervision.

    The analysis must be more than a static report. It must be delivered through integrated dashboard solutions that allow institutions to filter, drill down, and cross-reference the qualitative findings with demographic and discipline data. Only this level of flexibility enables staff to take equitable and targeted enhancement actions across their diverse PGT cohorts.
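    As a rough illustration of the kind of drill-down such dashboards enable, the sketch below cross-references categorised comments with discipline and mode-of-study fields using pandas. Every column name and record here is hypothetical rather than drawn from any real PTES dataset.

    ```python
    # Illustrative sketch only: cross-referencing categorised comments with
    # demographic and discipline data. All field names and records are invented.
    import pandas as pd

    comments = pd.DataFrame([
        {"theme": "Assessment feedback", "sentiment": "negative", "discipline": "Business", "mode_of_study": "Part-time"},
        {"theme": "Assessment feedback", "sentiment": "positive", "discipline": "Business", "mode_of_study": "Full-time"},
        {"theme": "Thesis supervision", "sentiment": "negative", "discipline": "Engineering", "mode_of_study": "Full-time"},
        {"theme": "Learning resources", "sentiment": "positive", "discipline": "Engineering", "mode_of_study": "Part-time"},
    ])

    # "Drill down": comment volume and share of negative sentiment per theme,
    # split by discipline (the same idea extends to any demographic field).
    summary = (
        comments
        .assign(is_negative=comments["sentiment"].eq("negative"))
        .groupby(["discipline", "theme"])["is_negative"]
        .agg(comment_count="size", negative_share="mean")
        .reset_index()
    )
    print(summary)
    ```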

    When these principles are prioritised, the result is an analytical framework specifically designed to meet the rigour and complexity required by the sector.

    The partnership between Advance HE, evasys, and Student Voice AI, which analysed this year’s PTES data, demonstrates what is possible when these rigorous standards are prioritised. We have offered participating institutions a comprehensive service that analyses open comments alongside the detailed benchmarking reports that Advance HE already provides. This collaboration has successfully built an analytical framework that exemplifies how sector-trained AI can deliver high-confidence, actionable intelligence.

    Jonathan Neves, Head of Research and Surveys at Advance HE, calls our solution “customised, transparent and genuinely focused on improving the student experience,” and adds, “We’re particularly impressed by how they present the data visually and look forward to seeing results from using these specialised tools in tandem.”

    Substance above all

    The commitment to analytical substance is paramount; without it, the risk to institutional resources and equity is severe. If institutions are to derive value, the analysis must be comprehensive. When the analysis lacks this depth, institutional resources are wasted acting on partial or misleading evidence.

    Rigorous analysis requires minimising what we call data leakage: the systematic failure to capture or categorise substantive feedback. Consider the alternative: when large percentages of feedback are ignored or left uncategorised, institutions are effectively muting a significant portion of the student voice. Or when a third of the remaining data is lumped into meaningless buckets like “other,” staff are left without actionable insight, forced to manually review thousands of comments to find the true issues.

    This is the point where the qualitative data, intended to unlock enhancement, becomes unusable for quality assurance. The result is not just a flawed report, but the failure to deliver equitable enhancement for the cohorts whose voices were lost in the analytical noise.
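    A minimal sketch of how an institution might quantify this kind of leakage for itself is shown below. The comments, theme labels and field names are invented for illustration; this is not a description of any particular vendor’s pipeline.

    ```python
    # A toy sketch of measuring "data leakage": the share of substantive
    # comments left uncategorised or dumped into an "Other" bucket.
    # All records and labels below are hypothetical.
    categorised = [
        {"comment": "Feedback on essays arrived too late to be useful", "themes": ["Assessment feedback"]},
        {"comment": "Library opening hours suit part-time students well", "themes": ["Learning resources"]},
        {"comment": "Struggled to contact my supervisor over summer", "themes": ["Other"]},
        {"comment": "Timetable changes were poorly communicated", "themes": []},  # missed entirely
    ]

    total = len(categorised)
    uncategorised = sum(1 for c in categorised if not c["themes"])
    other_only = sum(1 for c in categorised if c["themes"] == ["Other"])

    leakage_rate = (uncategorised + other_only) / total
    print(f"Uncategorised: {uncategorised}/{total}, 'Other' only: {other_only}/{total}, "
          f"leakage: {leakage_rate:.0%}")
    ```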

    Reliable, comprehensive processing is just the first step. The ultimate goal of AI analysis should be to deliver intelligence in a format that seamlessly integrates into strategic workflows. While impressive interfaces are visually appealing, genuine substance comes from the capacity to produce accurate, sector-relevant outputs. Institutions must be wary of solutions that offer a polished facade but deliver compromised analysis. Generic generative AI platforms, for example, offer the illusion of thematic analysis but are not robust.

    Robust validation of any output is therefore still required. This is the danger of smoke and mirrors – attractive dashboards that simply mask a high degree of data leakage, where large volumes of valuable feedback are ignored, miscategorised, or rendered unusable by the failure to assign sentiment.

    Dig deep, act fast

    When institutions choose rigour, the outcomes are fundamentally different, built on a foundation of confidence. Analysis ensures that virtually every substantive PGT comment is allocated to one or more UK-derived categories, providing a clear thematic structure for enhancement planning.

    Every comment with substance is assigned sentiment – positive, negative, or both where warranted – providing staff with the full, nuanced picture needed to build strategies that leverage strengths while addressing weaknesses.
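    To make the shape of that output concrete, here is a deliberately crude, keyword-based stand-in for the multi-label categorisation and dual-polarity sentiment described above. A production system would rely on models trained on UK HE comments rather than keyword matching, and every name here is illustrative.

    ```python
    # Toy illustration only: multi-label theme assignment plus positive and
    # negative sentiment flags. Not any vendor's actual method.
    THEME_KEYWORDS = {
        "Assessment feedback": ["feedback", "marking", "grades"],
        "Thesis supervision": ["supervisor", "dissertation", "thesis"],
        "Learning resources": ["library", "moodle", "resources"],
    }
    POSITIVE = {"helpful", "excellent", "quick", "supportive"}
    NEGATIVE = {"late", "slow", "unclear", "unhelpful"}

    def analyse(comment: str) -> dict:
        words = set(comment.lower().split())
        themes = [t for t, kws in THEME_KEYWORDS.items() if any(k in words for k in kws)] or ["Other"]
        return {
            "themes": themes,  # a comment can carry more than one theme
            "positive": bool(words & POSITIVE),
            "negative": bool(words & NEGATIVE),
        }

    print(analyse("Supervisor was supportive but feedback on drafts was late"))
    # -> both themes detected, with positive and negative sentiment flagged
    ```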

    This shift from raw data to actionable intelligence allows institutions to move quickly from insight to action. As Parama Chaudhury, Pro-Vice Provost (Education – Student Academic Experience) at UCL noted, the speed and quality of this approach “really helped us to get the qualitative results alongside the quantitative ones and encourage departmental colleagues to use the two in conjunction to start their work on quality enhancement.”

    The capacity to produce accurate, sector-relevant outputs, driven by rigorous processing, is what truly unlocks strategic value. Converting complex data tables into readable narrative summaries for each theme allows academic and professional services leaders alike to immediately grasp the findings and move to action. The ability to access categorised data via flexible dashboards and in exportable formats ensures the analysis is useful for every level of institutional planning, from the department to the executive team. And providing sector benchmark reports allows institutions to understand their performance relative to peers, turning internal data into external intelligence.

    The postgraduate taught experience is a critical pillar of UK higher education. The PTES data confirms the challenge, but the true opportunity lies in how institutions choose to interpret the wealth of student feedback they receive. The sheer volume of PGT feedback combined with the ethical imperative to deliver equitable enhancement for all students demands analytical rigour that is complete, nuanced, and sector-specific.

    This means shifting the focus from simply collecting data to intelligently translating the student voice into strategic priorities. When institutions insist on this level of analytical integrity, they move past the risk of smoke and mirrors and gain the confidence to act fast and decisively.

    It turns out Yosemite Sam was right all along: there’s gold in them thar hills. But finding it requires more than just a map; it requires the right analytical tools and rigour to finally extract that valuable resource and forge it into meaningful institutional change.

    This article is published in association with evasys. evasys and Student Voice AI are offering no-cost advanced analysis of NSS open comments, delivering comprehensive categorisation and sentiment analysis, a secure dashboard to view results, and a sector benchmark report. Click here to find out more and request your free analysis.

  • How can students’ module feedback help prepare for success in NSS?

    Since the dawn of student feedback there’s been a debate about the link between module feedback and the National Student Survey (NSS).

    Some institutions have historically doubled down on the idea that there is a read-across from the module learning experience to the student experience as captured by NSS and treated one as a kind of “dress rehearsal” for the other by asking the NSS questions in module feedback surveys.

    This approach arguably has some merits in that it sears the NSS questions into students’ minds to the point that when they show up in the actual NSS it doesn’t make their brains explode. It also has the benefit of simplicity – there’s no institutional debate about what module feedback should include or who should have control of it. If there isn’t a deep bench of skills in survey design in an institution there could be a case for adopting NSS questions on the grounds they have been carefully developed and exhaustively tested with students. Some NSS questions have sufficient relevance in the module context to do the job, even if there isn’t much nuance there – a generic question about teaching quality or assessment might resonate at both levels, but it can’t tell you much about specific pedagogic innovations or challenges in a particular module.

    However, there are good reasons not to take this “dress rehearsal” approach. NSS endeavours to capture the breadth of the student experience at a very high level, not the specific module experience. It’s debatable whether module feedback should even be trying to measure “experience” – there are other possible approaches, such as focusing on learning gains or skills development, especially if the goal is to generate actionable feedback data about specific module elements. For both students and academics, seeing the same set of questions repeated ad nauseam is really rather boring, and is as likely to create disengagement and alienation from the “experience” construct NSS proposes as a comforting sense of familiarity and predictability.

    But separating out the two feedback mechanisms entirely doesn’t make total sense either. Though the totemic status of NSS has been tempered in recent years it remains strategically important as an annual temperature check, as a nationally comparable dataset, as an indicator of quality for the Teaching Excellence Framework and, unfortunately, as a driver of league table position. Securing consistently good NSS scores, alongside student continuation and employability, will feature in most institutions’ key performance indicators and, while vice chancellors and boards will frequently exercise their critical judgement about what the data is actually telling them, when it comes to the crunch no head of institution or board wants to see their institution slip.

    Module feedback, therefore, offers an important “lead indicator” that can help institutions maximise the likelihood that students have the kind of experience that will prompt them to give positive NSS feedback – indeed, the ability to continually respond and adapt in light of feedback can often be a condition of simply sustaining existing performance. But if simply replicating the NSS questions at module level is not the answer, how can these links best be drawn? Wonkhe and evasys recently convened an exploratory Chatham House discussion with senior managers and leaders from across the sector to gather a range of perspectives on this complex issue. While success in NSS remains part of the picture for assigning value and meaning to module feedback in particular institutional contexts there is a lot else going on as well.

    A question of purpose

    Module feedback can serve multiple purposes, and it’s an open question which of those purposes different institutions consider legitimate. To give some examples, module feedback can:

    • Offer institutional leaders an institution-wide “snapshot” of comparable data that can indicate where there is a need for external intervention to tackle emerging problems in a course, module or department
    • Test and evaluate the impact of education enhancement initiatives at module, subject or even institution level, or capture progress with implementing systems, policies or strategies
    • Give professional service teams feedback on patterns of student engagement with and opinions on specific provision such as estates, IT, careers or library services
    • Give insight to module leaders about specific pedagogic and curriculum choices and how these were received by students to inform future module design
    • Give students the opportunity to reflect on their own learning journey and engagement
    • Generate evidence of teaching quality that academic staff can use to support promotion or inform fellowship applications
    • Depending on the timing, capture student sentiment and engagement and indicate where students may need additional support or whether something needs to be changed mid-module

    Needless to say, all of these purposes can be legitimate and worthwhile, but not all of them can comfortably coexist. Leaders may prioritise comparability of data – asking the same questions across all modules to generate comparable results and identify priorities. Similarly, those operating across an institution may be keen to map patterns and capture differences across subjects – one example offered at the round table was whether students had met with their personal tutor. Such questions may be experienced at department or module level as intrusive and irrelevant to more immediately purposeful questions around students’ learning experience on the module. Module leaders may want to design their own student evaluation questions, tailored to inform their pedagogic practice and future iterations of the module.

    There are also a lot of pragmatic and cultural considerations to navigate. Everyone is mindful that students get asked to feed back on their experiences A LOT – sometimes even before they have had much of a chance to actually have an experience. As students’ lives become more complicated, institutions are increasingly wary of the potential for cognitive overload that comes with being constantly asked for feedback. Additionally, institutions need to make their processes of gathering and acting on feedback visible to students, so that students can see there is an impact to sharing their views – and will confirm this when asked in the NSS. Some institutions are even building into their student surveys questions that test whether students can see the feedback loop being closed.

    Similarly, there is also a strong appreciation of the need to adopt survey approaches that support and enable staff to take action and adapt their practice in response to feedback, affecting the design of the questions, the timing of the survey, how quickly staff can see the results and the degree to which data is presented in a way that is accessible and digestible. For some, trusting staff to evaluate their modules in the way they see fit is a key tenet of recognising their professionalism and competence – but there is a trade-off in terms of visibility of data institution-wide or even at department or subject level.

    Frameworks and ecosystems

    There are some examples in the sector of mature approaches to linking module evaluation data to NSS – it is possible to take a data-led approach that tests the correlation between particular module evaluation question responses and corresponding NSS question outcomes within particular thematic areas or categories, and builds a data model that proposes informed hypotheses about areas of priority for development or approaches that are most likely to drive NSS improvement. This approach does require strong data analysis capability, which not every institution has access to, but it certainly warrants further exploration where the skills are there. The use of a survey platform like evasys allows for the creation of large module evaluation datasets that could be mapped on to NSS results through business intelligence tools to look for trends and correlations that could indicate areas for further investigation.
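    As a sketch of what that data-led approach might look like in practice, the snippet below correlates a course-level module evaluation score with the corresponding NSS theme positivity using pandas. The figures and column names are invented purely for illustration, not drawn from any institution’s data.

    ```python
    # A sketch, under assumed data, of correlating a module-feedback "lead
    # indicator" with an NSS theme outcome. Figures are illustrative only.
    import pandas as pd

    df = pd.DataFrame({
        "course": ["Law", "History", "Nursing", "Physics", "Economics"],
        "module_assessment_score": [3.9, 4.2, 3.6, 4.0, 3.7],        # mean module-survey score (1-5)
        "nss_assessment_positivity": [68.0, 75.0, 61.0, 72.0, 64.0],  # % positive on the NSS theme
    })

    # Pearson correlation between the module-level indicator and the NSS outcome;
    # a strong positive value would suggest an area worth investigating further.
    r = df["module_assessment_score"].corr(df["nss_assessment_positivity"])
    print(f"Correlation across courses: r = {r:.2f}")
    ```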

    Others take the view that maximising NSS performance is something of a red herring as a goal in and of itself – if the wider student feedback system is working well, then the result should be solid NSS performance, assuming that NSS is basically measuring the right things at a high level. Some go even further and express concern that over-focus on NSS as an indicator of quality can be to the detriment of designing more authentic student voice ecosystems.

    But while thinking in terms of the whole system is clearly going to be more effective than a fragmented approach, given the various considerations and trade-offs discussed it is genuinely challenging for institutions to design such effective ecosystems. There is no “right way” to do it but there is an appetite to move module feedback beyond the simple assessment of what students like or don’t like, or the checking of straightforward hygiene factors, to become a meaningful tool for quality enhancement and pedagogic innovation. There is a sense that rather than drawing direct links between module feedback and NSS outcomes, institutions would value a framework-style approach that is able to accommodate the multiple actors and forms of value that are realised through student voice and feedback systems.

    In the coming academic year Wonkhe and evasys are planning to work with institutional partners on co-developing a framework or toolkit to integrate module feedback systems into wider student success and academic quality strategies – contact us to express interest in being involved.

    This article is published in association with evasys.
