Category: student surveys

  • How can students’ module feedback help prepare for success in NSS?

    Since the dawn of student feedback there’s been a debate about the link between module feedback and the National Student Survey (NSS).

    Some institutions have historically doubled down on the idea that there is a read-across from the module learning experience to the student experience as captured by NSS, and treated one as a kind of “dress rehearsal” for the other by asking the NSS questions in module feedback surveys.

    This approach arguably has some merits in that it sears the NSS questions into students’ minds to the point that when they show up in the actual NSS it doesn’t make their brains explode. It also has the benefit of simplicity – there’s no institutional debate about what module feedback should include or who should have control of it. If there isn’t a deep bench of skills in survey design in an institution there could be a case for adopting NSS questions on the grounds they have been carefully developed and exhaustively tested with students. Some NSS questions have sufficient relevance in the module context to do the job, even if there isn’t much nuance there – a generic question about teaching quality or assessment might resonate at both levels, but it can’t tell you much about specific pedagogic innovations or challenges in a particular module.

    However, there are good reasons not to take this “dress rehearsal” approach. NSS endeavours to capture the breadth of the student experience at a very high level, not the specific module experience. It’s debatable whether module feedback should even be trying to measure “experience” – there are other possible approaches, such as focusing on learning gains or skills development, especially if the goal is to generate actionable feedback data about specific module elements. For both students and academics, seeing the same set of questions repeated ad nauseam is really rather boring, and is as likely to create disengagement and alienation from the “experience” construct NSS proposes as a comforting sense of familiarity and predictability.

    But separating out the two feedback mechanisms entirely doesn’t make total sense either. Though the totemic status of NSS has been tempered in recent years it remains strategically important as an annual temperature check, as a nationally comparable dataset, as an indicator of quality for the Teaching Excellence Framework and, unfortunately, as a driver of league table position. Securing consistently good NSS scores, alongside student continuation and employability, will feature in most institutions’ key performance indicators and, while vice chancellors and boards will frequently exercise their critical judgement about what the data is actually telling them, when it comes to the crunch no head of institution or board wants to see their institution slip.

    Module feedback, therefore, offers an important “lead indicator” that can help institutions maximise the likelihood that students have the kind of experience that will prompt them to give positive NSS feedback – indeed, the ability to continually respond and adapt in light of feedback can often be a condition of simply sustaining existing performance. But if simply replicating the NSS questions at module level is not the answer, how can these links best be drawn? Wonkhe and evasys recently convened an exploratory Chatham House discussion with senior managers and leaders from across the sector to gather a range of perspectives on this complex issue. While success in NSS remains part of the picture for assigning value and meaning to module feedback in particular institutional contexts, there is a lot else going on as well.

    A question of purpose

    Module feedback can serve multiple purposes, and it’s an open question which of those purposes different institutions consider legitimate. To give some examples, module feedback can:

    • Offer institutional leaders an institution-wide “snapshot” of comparable data that can indicate where there is a need for external intervention to tackle emerging problems in a course, module or department
    • Test and evaluate the impact of education enhancement initiatives at module, subject or even institution level, or capture progress with implementing systems, policies or strategies
    • Give professional service teams feedback on patterns of student engagement with and opinions on specific provision such as estates, IT, careers or library services
    • Give insight to module leaders about specific pedagogic and curriculum choices and how these were received by students to inform future module design
    • Give students the opportunity to reflect on their own learning journey and engagement
    • Generate evidence of teaching quality that academic staff can use to support promotion or inform fellowship applications
    • Depending on the timing, capture student sentiment and engagement and indicate where students may need additional support or whether something needs to be changed mid-module

    Needless to say, all of these purposes can be legitimate and worthwhile, but not all of them can comfortably coexist. Leaders may prioritise comparability of data, ie asking the same questions across all modules to generate comparable results and identify priorities. Similarly, those operating across an institution may be keen to map patterns and capture differences across subjects – one example offered at the round table was whether students had met with their personal tutor. Such questions may be experienced at department or module level as intrusive and irrelevant to more immediately purposeful questions about students’ learning experience on the module. Module leaders may want to design their own student evaluation questions, tailored to inform their pedagogic practice and future iterations of the module.

    There are also a lot of pragmatic and cultural considerations to navigate. Everyone is mindful that students get asked to feed back on their experiences A LOT – sometimes even before they have had much of a chance to actually have an experience. As students’ lives become more complicated, institutions are increasingly wary of the potential for cognitive overload that comes with being constantly asked for feedback. Additionally, institutions need to make their processes of gathering and acting on feedback visible to students, so that students can see there is an impact to sharing their views – and will confirm this when asked in the NSS. Some institutions are even building into their student surveys questions that test whether students can see the feedback loop being closed.

    Similarly, there is a strong appreciation of the need to adopt survey approaches that support and enable staff to take action and adapt their practice in response to feedback – this affects the design of the questions, the timing of the survey, how quickly staff can see the results, and the degree to which data is presented in a way that is accessible and digestible. For some, trusting staff to evaluate their modules in the way they see fit is a key tenet of recognising their professionalism and competence – but there is a trade-off in terms of visibility of data institution-wide, or even at department or subject level.

    Frameworks and ecosystems

    There are some examples in the sector of mature approaches to linking module evaluation data to NSS – it is possible to take a data-led approach that tests the correlation between particular module evaluation question responses and corresponding NSS question outcomes within particular thematic areas or categories, and builds a data model that proposes informed hypotheses about areas of priority for development or approaches that are most likely to drive NSS improvement. This approach does require strong data analysis capability, which not every institution has access to, but it certainly warrants further exploration where the skills are there. The use of a survey platform like evasys allows for the creation of large module evaluation datasets that could be mapped on to NSS results through business intelligence tools to look for trends and correlations that could indicate areas for further investigation.
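
    To make the idea concrete, here is a minimal sketch of what such a correlation check might look like in practice. It is a hypothetical illustration in Python using pandas: the file names, column names and the mapping between module evaluation questions and NSS themes are all invented for the example, and it does not describe how evasys or any particular institution actually structures its data.

        # Hypothetical sketch: correlate aggregated module evaluation scores with
        # NSS theme scores for the same courses. All file names, column names and
        # the theme mapping below are illustrative assumptions.
        import pandas as pd

        # One row per course: mean module evaluation score per question theme,
        # and the corresponding NSS theme scores for that course.
        module_scores = pd.read_csv("module_evaluation_by_course.csv")  # course_id, q_teaching, q_assessment, q_resources
        nss_scores = pd.read_csv("nss_theme_scores_by_course.csv")      # course_id, nss_teaching, nss_assessment, nss_resources

        # Assumed mapping between module evaluation questions and NSS themes.
        theme_map = {
            "q_teaching": "nss_teaching",
            "q_assessment": "nss_assessment",
            "q_resources": "nss_resources",
        }

        merged = module_scores.merge(nss_scores, on="course_id")

        # Spearman rank correlation per theme: which module-level questions track
        # most closely with the corresponding NSS outcome across courses?
        correlations = {
            module_q: merged[module_q].corr(merged[nss_theme], method="spearman")
            for module_q, nss_theme in theme_map.items()
        }

        for module_q, rho in sorted(correlations.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{module_q} vs {theme_map[module_q]}: Spearman rho = {rho:.2f}")

    Even with a richer model, a strong correlation found this way is a prompt for further investigation, not proof that improving a module score will move the corresponding NSS result.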

    Others take the view that maximising NSS performance is something of a red herring as a goal in and of itself – if the wider student feedback system is working well, then the result should be solid NSS performance, assuming that NSS is basically measuring the right things at a high level. Some go even further and express concern that over-focus on NSS as an indicator of quality can be to the detriment of designing more authentic student voice ecosystems.

    But while thinking in terms of the whole system is clearly going to be more effective than a fragmented approach, given the various considerations and trade-offs discussed it is genuinely challenging for institutions to design such effective ecosystems. There is no “right way” to do it but there is an appetite to move module feedback beyond the simple assessment of what students like or don’t like, or the checking of straightforward hygiene factors, to become a meaningful tool for quality enhancement and pedagogic innovation. There is a sense that rather than drawing direct links between module feedback and NSS outcomes, institutions would value a framework-style approach that is able to accommodate the multiple actors and forms of value that are realised through student voice and feedback systems.

    In the coming academic year Wonkhe and evasys are planning to work with institutional partners on co-developing a framework or toolkit to integrate module feedback systems into wider student success and academic quality strategies – contact us to express interest in being involved.

    This article is published in association with evasys.

  • The power of pre-arrival student questionnaires

    If you knew more about your incoming student body what would you do to change your pre-arrival, arrival and orientation, and induction to study practices?

    For example, if you knew that only 30 per cent of your incoming undergraduate students had experience of accessing learning materials in a school/college library, what library resource sessions would be provided on entry? Lack of library experience is exacerbated by the fact that since 2010, over 800 public libraries have closed in the UK.

    If the course and IT team knew that over one-third of new postgraduate taught students had limited or no experience of using a virtual learning environment, what enhanced onboarding approach could be adopted?

    If you knew that 12.4 per cent of undergraduate and 13.5 per cent of postgraduate taught students decided to study at a university closer to home due to the cost of living crisis, what teaching delivery pattern and support would you put in place for students who have a long commute?

    If you knew that 43.7 per cent of undergraduate and 45.4 per cent of postgraduate taught respondents reported attendance of 80 per cent or below in their final year of study, due to 34.6 per cent of undergraduates and 25.0 per cent of postgraduate taught students experiencing mental health and wellbeing issues, what support would you put in place?

    And if you knew that at undergraduate level, male respondents stated they were three times more likely to use sports facilities than mental health services, how could you promote mental health and wellbeing through sports?

    All of these examples are taken from previous iterations of pre-arrival questionnaires (PAQs), run at various universities around the UK.

    A lack of knowledge

    As a rule, we know very little about our incoming students’ prior learning experiences, concerns, worries, and expectations of university study. It is an area where limited work has been undertaken, and yet it is such a critical one if we are to effectively bridge the transition from secondary to tertiary education.

    We have no idea about the different experiences of our incoming students by student characteristics, by region, or by type of institution. If we did, would institutions continue to be weighed, measured and judged in the same way as is currently the case?

    Through my (Michelle’s) own learning journey as a mature, working-class, mixed-race female whose parents had no educational aspirations for me, when I finally went to do a degree at a polytechnic, I struggled to get the support I needed, especially in terms of learning how to learn again after a five-year study break.

    I was treated exactly the same as my 18-year-old classmates who had come straight from school. Assumptions were made that I should know and remember how to learn, and this was made very clear in negative feedback. But as we know, learning at school and college is different to university, and if you have been out of education for a while it can be a daunting experience reengaging with how to learn.

    In the various roles I have undertaken, and through the creation of my whole-university integrated student experience model (SET model), I recognised that to enable effective change to happen, not only in the learning sphere but also in the support one, we needed data to understand where and how to make change. So over 20 years ago, I started creating and undertaking pre-arrival academic questionnaires (PAQs) at undergraduate and postgraduate taught level to get insight into different prior learning experiences and how these may impact on concerns, worries and expectations of higher education.

    Purpose of the PAQ

    NSS metrics are informative, but they offer only a snapshot of the university experience of those who made it nearly to the end of their degree. They do not reflect the voice of incoming students, and they do not provide any real-time indication of what kind of support new students need.

    The PAQ (formerly called the “entry to study survey”) is a powerful tool. Results can challenge and change the assumptions of staff and university leaders in terms of what they think they know about their incoming students. As with the postgraduate taught and postgraduate research experience surveys (PTES and PRES), the questions evolve to take account of a changing environment and the impact it has on our students (including things like Covid-19 and the cost of living crisis).

    The PAQ also provides a meaningful course activity early on. It gets students to reflect on their learning, both their past learning journey and their expectations of university study. Students answer a range of questions across six sections that cover prior learning experiences, concerns on entry, how they expect to study at university, what they see as their priorities in the coming year, their strengths and weaknesses, and expected university study outcomes. As it is delivered as a course activity, students engage with it.

    Within three weeks of the PAQ survey closing, students get the headline findings along with relevant support and advice. This shows them that they are not alone in their prior learning experiences or in any concerns or worries they may have, and that their voice has been listened to.

    The information gleaned from the PAQ helps inform every area of a university’s work, from Access and Participation Plans to recruitment, orientation and induction to study, and policy and support.

    A national pilot

    In September 2025, Advance HE and Jisc, funded by the Office for Students, will commence the first of two annual waves of a national pilot in England, using the UG and PGT PAQ work I have undertaken at the University of East London and other institutions. The aims and objectives include:

    • To establish consistency in how the sector collects and acts upon information from students upon arrival around their learning styles, expectations, challenges and requirements.
    • To drive dedicated activity at the local level to close the gap between expectations, requirements and the actual experience upon arrival.
    • To provide robust data-led evidence to enable institutions to address inconsistencies in how different groups of students (for example by social background, qualification type, geography and demographics) begin their learning and develop a platform to progress to good outcomes.
    • To create a fuller understanding across the sector of the pre-arrival experience, providing evidence for wider policy making and cross-sector activity.
    • To support providers in delivering a range of practical outcomes across different student groups, including improved wellbeing and belonging, improved continuation and attainment. Earlier and preventative intervention should further contribute to higher progression to further study or employment.

    The questions in the PAQ contribute valuable insights and knowledge that align with the themes in the University Mental Health Charter.

    How can you get involved in the national PAQ pilot?

    Participating is free of charge (although a Jisc Online Surveys licence is required). As a benefit of participation, participants will receive fast turnaround results, detailed benchmarking reports, resources to boost participation and an invitation to an end-of-cycle dissemination conference.

    In return for free participation, institutions are asked to proactively distribute and promote their survey at course level, drive transformation activity on the back of the results and develop a case study for each year of participation.

    We are currently welcoming expressions of interest as we look to confirm participation with a representative sample of 20-30 institutions in each year of the pilot. Please complete the survey form with your expression of interest by the end of April 2025, and we will be in touch soon.

    To raise specific questions or to set up a dedicated discussion please contact jonathan.neves@advance-he.ac.uk or M.Morgan@uel.ac.uk.
