Tag: REF

  • Generative AI and the REF: closing the gap between policy and practice

    This blog was kindly authored by Liam Earney, Managing Director, HE and Research, Jisc.

    The REF-AI report, which was funded by Research England and co-authored by Jisc and the Centre for Higher Education Transformations (CHET), was designed to provide evidence to help the sector prepare for the next REF. Its findings show that Generative AI is already shaping the approaches that universities adopt. Some approaches are cautious and exploratory, some are inventive and innovative, and most of this activity is happening quietly in the background. GenAI in research practice is no longer theoretical; it is part of the day-to-day reality of research, and of research assessment.

    For Jisc, some of the findings in the report are unsurprising. We see every day how digital capability is uneven across the sector, and how new tools arrive before governance has had a chance to catch up. The report highlights an important gap between emerging practice and policy – a gap that the sector can now work collaboratively to close. UKRI has already issued guidance on generative AI use in funding applications and assessment: emphasising honesty, rigour, transparency, and confidentiality. Yet the REF context still lacks equivalent clarity, leaving institutions to interpret best practice alone. This work was funded by Research England to inform future guidance and support, ensuring that the sector has the evidence it needs to navigate GenAI responsibly.

    The REF-AI report rightly places integrity at the heart of its recommendations. Recommendation 1 is critical to support transparency and avoid misunderstandings: every university should publish a clear policy on using Generative AI in research, and specifically in REF work. That policy should outline what is acceptable and require staff to disclose when AI has helped shape a submission.

    This is about trust and about laying the groundwork for a fair assessment system. At present, too much GenAI use is happening under the radar, without shared language or common expectations. Clarity and consistency will help maintain trust in an exercise that underpins the distribution of public research funding.

    Unpicking a patchwork of inconsistencies

    We now have insight into real practice across UK universities. Some are already using GenAI to trawl for impact evidence, to help shape narratives, and even to review or score outputs. Others are experimenting with bespoke tools or home-grown systems designed to streamline their internal processes.

    This kind of activity is usually driven by good intentions. Teams are trying to cope with rising workloads and the increased complexity that comes with each REF cycle. But when different institutions use different tools in different ways, the result is not greater clarity. It is a patchwork of inconsistent practices and a risk that those involved do not clearly understand the role GenAI has played.

    The report notes that most universities still lack formal guidance and that internal policy discussions are only just beginning. In fact, practice has moved so far ahead of governance that many colleagues are unaware of how much GenAI is already embedded in their own institution’s REF preparation – or, in the case of professional services staff, how much GenAI their researchers are already using.

    The sector digital divide

    This is where the sector can work together, with support from Jisc and others, to help narrow the divide that exists. The survey results tell us that many academics are deeply sceptical of GenAI in almost every part of the REF. Strong disagreement is common and, in some areas, reaches seventy per cent or more. Only a small minority sees value in GenAI for developing impact case studies.

    In contrast, interviews with senior leaders reveal a growing sense that institutions cannot afford to ignore this technology. Several Pro Vice Chancellors told us that GenAI is here to stay and that the sector has a responsibility to work out how to use it safely and responsibly.

    This tension is familiar to Jisc. GenAI literacy is uneven, as is confidence, and even general digital capability. Our role is to help universities navigate that unevenness. In learning and teaching, this need is well understood, with our AI literacy programme for teaching staff well established. The REF-AI findings make clear that similar support will be needed for research staff.

    Why national action matters

    If we leave GenAI use entirely to local experimentation, we will widen the digital divide between those who can invest in bespoke tools and those who cannot. The extent to which institutions can benefit from GenAI is tightly bound to their resources and existing expertise. A national research assessment exercise cannot afford to leave that unaddressed.

    We also need to address research integrity, and that should be the foundation for anything we do next. If the sector wants a safe and fair path forward, then transparency must come first. That is why Recommendation 1 matters. The report suggests universities should consider steps such as:

    • define where GenAI can and cannot be used
    • require disclosure of GenAI involvement in REF related work
    • embed these decisions into their broader research integrity and ethics frameworks

    As the report notes, current thinking about GenAI rarely connects with responsible research assessment initiatives such as DORA or CoARA. That gap has to close.

    Creating the conditions for innovation

    These steps do not limit innovation; they make innovation possible in a responsible way. At Jisc we already hear from institutions looking for advice on secure, trustworthy GenAI environments. They want support that will enable experimentation without compromising data protection, confidentiality or research ethics. They want clarity on how to balance efficiency gains with academic oversight. And they want to avoid replicating the mistakes of early digital adoption, where local solutions grew faster than shared standards.

    The REF-AI report gives the sector the evidence it needs to move from informal practice to a clear, managed approach.

    The next REF will arrive at a time of major financial strain and major technological change. GenAI can help reduce burden and improve consistency, but only if it is used transparently and with a shared commitment to integrity. With the right safeguards, GenAI could support fairness in the assessment of UK research.

    From Jisc’s perspective, this is the moment to work together. Universities need policies. Panels need guidance. And the sector will need shared infrastructure that levels the field rather than widening existing gaps.

  • REF should be about technical professionals too

    Every great discovery begins long before a headline or journal article.

    Behind every experiment, dataset, and lecture lies a community of highly skilled technical professionals, technologists, facility managers, and infrastructure specialists. They design and maintain the systems that make research work, train others to use complex equipment, and ensure data integrity and reproducibility. Yet their contribution has too often been invisible in how we assess and reward research excellence.

    The pause in the Research Excellence Framework (REF) is more than a scheduling adjustment; it’s a moment to reflect on what we value within the UK research and innovation sector.

    If we are serious about supporting excellence, we must recognise all those who make it possible, not just those whose names appear on papers or grants, but the whole team, including technical professionals whose expertise enables every discovery.

    Making people visible in research culture

    Over the past decade, there has been growing recognition that research culture – including visibility, recognition, and support for technical professionals – is central to delivering world-class outcomes. Initiatives such as the Technician Commitment, now backed by more than 140 universities and research institutes, have led the way in embedding good practice around technical professional careers, progression, and recognition.

    Alongside this, the UK Institute for Technical Skills and Strategy (UK ITSS) continues to advocate for technical professionals nationally to ensure they are visible and their inputs are recognised within the UK’s research and innovation system. These developments have helped reshape how universities think about people, culture, and environment, creating the conditions where all contributors to research and innovation can thrive.

    A national capability – not a hidden workforce

    This shift is not just about fairness or inclusion; it’s about the UK’s ability to deliver on its strategic ambitions. Technical professionals are critical to achieving the goals set out in the UK Government’s Modern Industrial Strategy and to the success of frontier technologies such as artificial intelligence, quantum, engineering biology, advanced connectivity, and semiconductors. These frontier sectors rely on technical specialists to design, operate, and maintain the underpinning infrastructure on which research and innovation depend.

    Without a stable, well-supported technical professional workforce, the UK risks losing the very capacity it needs to remain globally competitive. Attracting, training, and retaining this talent necessitates that technical roles are visible and recognised – not treated as peripheral to research, but as essential to it.

    Why REF matters

    This is where the People, Culture and Environment (PCE) element of the REF becomes critical. REF has always shaped behaviour across the sector. Its weighting signals what the UK values in research and innovation. Some have argued that PCE should be reduced (or indeed removed) to simplify the REF process, ease administrative burden, or avoid what they see as subjectivity in the assessment of research culture. Others have suggested that a greater emphasis on environment would shift focus away from research excellence, or that culture work is too challenging to assess consistently across institutions. But these arguments overlook something fundamental: the quality of our research – the excellence we deliver as a sector – is intrinsically tied to the conditions in which it is produced. As such, reducing the weighting of PCE would send a contradictory message: that culture, collaboration, and support for people are secondary to outputs rather than two sides of the same coin.

    The Stern Review and the Future Research Assessment Programme both recognised the need for a greater focus on research and innovation environments. PCE is not an optional extra; it is fundamental to research integrity, innovation, and excellence. A justifiably robust weighting reflects this reality and gives institutions the incentive to continue investing in healthy, supportive, and inclusive environments.

    Universities have already made significant progress on this by developing new data systems, engaging staff, and benchmarking culture change. There is clear evidence that the proposed PCE focus has driven positive shifts in institutional behaviour. To step away from this now would risk undoing that progress and undermine the growing recognition of technical professionals as central to research and innovation success.

    Including technical professionals explicitly within REF delivers real benefits for both technical professionals and their institutions, and ultimately strengthens research excellence. For technicians, recognition within the PCE element encourages universities to create the kind of environments in which they can thrive – cultures that value their expertise, provide clearer career pathways, invest in skills, and ensure they have the support and infrastructure to contribute fully to research. Crucially, REF 2029 also enables institutions to submit outputs led by technical colleagues, recognising their role in developing methods, tools, data, and innovations that directly advance knowledge.

    For universities, embedding this broader community within PCE strengthens the systems REF is designed to assess. It drives safer, more efficient and sustainable facilities, improves data quality and integrity, and fosters collaborative, well-supported research environments. By incentivising investment in skilled, stable, and empowered technical teams, the inclusion of technicians enhances the reliability, reproducibility, and innovation potential of research – ultimately raising the standard of research excellence across the institution.

    From hidden to central

    REF has the power not only to measure excellence, but to shape it. By maintaining a strong focus on people and culture, it can encourage institutions to build the frameworks, leadership roles, and recognition mechanisms that enable all contributors, whether technical, academic, or professional, to contribute and excel.

    In doing so, REF can help normalise good practice, embed openness and transparency, and ensure that the environments underpinning discovery are as innovative and excellence driven as the research itself.

    Technical professionals have always been at the heart of UK research. Their skill, creativity, and dedication underpin every discovery, innovation, and breakthrough. What’s changing now is visibility. Their contribution is increasingly recognised and celebrated as foundational to research excellence and national capability.

    As REF evolves, it must continue to reward the environments that nurture, develop, and sustain technical expertise. In doing so, it can help ensure that technical professionals are not just acknowledged but firmly established at the centre of the UK’s research and innovation system – visible, recognised, and vital (as ever) to its future success.

  • The REF helps make research open, transparent, and credible – let’s not lose that

    The pause to reflect on REF 2029 has reignited debate about what the exercise should encompass – and in particular whether and how research culture should be assessed.

    Open research is a core component of a strong research culture. Now is the time to take stock of what has been achieved, and to consider how REF can promote the next stage of culture change around open research.

    Open research can mean many things in different fields, as the UNESCO Recommendation on Open Science makes clear. Wherever it is practised, open research shifts focus away from outputs and onto processes, with the understanding that if we make the processes around research excellent, then excellent outcomes will follow.

    Trust

    Being open allows quality assurance processes to work, and therefore research to be trustworthy. Although not all aspects of research can be open (sensitive personal data, for example), an approach to learning about the world that is as open as possible differentiates academic research from almost all other routes to knowledge. Open research is not just a set of practices – it’s part of the culture we build around integrity, collaboration and accountability.

    But doing research openly takes time, expertise, support and resources. As a result, researchers can feel vulnerable. They can worry that taking the time to focus on high-quality research processes might delay publication and risk them being scooped, or that including costs for open research in funding bids might make them less likely to be funded; they worry about jeopardising their careers. Unless all actors in the research ecosystem engage, then some researchers and some institutions will feel that they put themselves at a disadvantage.

    Open research is, therefore, a collective action problem, requiring not only policy levers but a culture shift in how research is conducted and disseminated, which is where the REF comes in.

    REF 2021

    Of all the things that influence how research is done and managed in the UK HE sector, the REF is the one that perhaps attracts most attention, despite guiding far less funding than is distributed to HEIs in other ways.

    One of the reasons for this attention is that REF is one of the few mechanisms to address collective action problems and drive cultural change in the sector. It does this in two ways: by setting minimum standards for a submission, and by setting some defined assessment criteria beyond those minimum standards. Both mechanisms provide incentives for submitting institutions to behave in particular ways. It is not enough for institutions simply to say that they behave in this way – by making submissions open, the REF makes institutions accountable for their claims, in the same way as researchers are made accountable when they share their data, code and materials.

    So, then, how has this worked in practice?

    A review of the main panel reports from REF 2021 shows that evidence of open research was visible across all four main panels, but unevenly distributed. Panel A highlighted internationally significant leadership in Public Health, Health Services and Primary Care (UoA 2) and Psychology, Psychiatry and Neuroscience (UoA 4), while Panel B noted embedded practices in Chemistry (UoA 8) and urged Computer Science and Informatics (UoA 11) to make a wider shift towards open science through sharing data, software, and protocols. Panel C pointed to strong examples in Geography and Environmental Studies (UoA 14), and in Archaeology (UoA 15), where collaboration, transparency, and reproducibility were particularly evident. By contrast, Panel D – and parts of Panel C – showed how definitions of open research can be more complex, because what constitutes ‘open research’ is perhaps much more nuanced and varied in these disciplines, and these disciplines did not always demonstrate how they were engaging with institutional priorities on open research and supporting a culture of research integrity. Overall, then, open research did not feature in the reports on most UoAs.

    It is clear that in 2021 there was progress, guided in part by the inclusion of a clear indicator in the REF guidance. However, there is still a long way to go, and open research was understood and evidenced in ways that could exclude some research fields, epistemologies and transparent research practices.

    REF 2029

    With REF 2029, the new People, Culture and Environment element has created a stronger incentive to drive culture change across the sector. Institutions are embracing the move beyond compliance, making openness and transparency a core part of everyday research practice. However, alignment between this sector move, REF policy and funder action remains essential to address this collective action problem and therefore ensure that this progress is maintained.

    To step back now would not only risk slowing, or even undoing, progress, but would send confused signals that openness and transparency may be optional extras rather than essentials for a trusted research system. Embedding this move is not optional: a culture of openness is essential for the sustainability of UK research and development, for the quality of research processes, and for ensuring that outputs are not just excellent, but also trustworthy in a time of mass misinformation.

    Openness, transparency and accountability are key attributes of research, and hallmarks of the culture that we want to see in the sector now and in the future. Critically, coordinated sector-wide, institutional and individual actions are all needed to embed more openness into everyday research practices. This is not just about compliance – it is about a genuine culture shift in how research is conducted, shared and preserved. It is about doing the right thing in the right way. If that is accepted, then we would challenge those advocating for reducing the importance of those practices in the REF: what is your alternative, and will it command public trust?

     

    This article was supported by contributions from:

    Michel Belyk (Edge Hill University), Nik Bessis (Edge Hill University), Cyclia Bolibaugh (University of York), Will Cawthorn (University of Edinburgh), Joe Corneli (Oxford Brookes University), Thomas Evans (University of Greenwich), Eleanora Gandolfi (University of Surrey), Jim Grange (Keele University), Corinne Jola (Abertay University), Hamid Khan (Imperial College, London), Gemma Learmonth (University of Stirling), Natasha Mauthner (Newcastle University), Charlotte Pennington (Aston University), Etienne Roesch (University of Reading), Daniela Schmidt (University of Bristol), Suzanne Stewart (University of Chester), Richard Thomas (University of Leicester), Steven Vidovic (University of Southampton), Eric White (Oxford Brookes University).

  • The disagreements on REF cannot go on forever – it may be time for a compromise

    The submission deadline for REF is autumn 2028. That is not very far away, and there are still live debates on significant parts of the exercise without an obvious way forward in sight.

    As the Contributions to Knowledge and Understanding guidance makes clear, there are still significant areas where guidance is awaited. The People, Culture and Environment (PCE) criteria and definitions will be published in autumn this year. Undoubtedly, this will kick off further rounds of debate on REF and its purposes. It feels like there is a lot left to do and not much time left to do it in.

    Compromise

    The four UK higher education funding bodies could take the view that the level of disquiet in the sector about REF – and from what I am hearing at the events I go to and from the people I speak to, it does seem significant – will eventually dissipate as the business of REF gets underway.

    This now seems unlikely. It is clear that there are increasingly entrenched views on the workability or not of the new portability measures, and there is still the ongoing debate on the extent to which research culture can be measured. Research England has sought to take the sector toward ends which have broad support, improving the diversity and conditions of research, but there is much less consensus on how to get there.

    The consequences of continuing as is are unpredictable but potentially significant. At the most practical level, the people working on REF only have so much resource and bandwidth. The debate about the future of REF will not go away as more guidance is released – in fact, it is likely to intensify – and getting to submission while there is still significant disagreement will drain resources and time.

    The debate also crowds out the other work that is going on in research. All the time that the future of REF is being debated is time taken away from all of the funding which is not allocated through REF, all of the problems with research that do not stem from this quinquennial exercise, and the myriad other research issues that sit beyond the sector’s big research audit. The REF looms large in the imagination of the sector, but the current impasse is eclipsing much else.

    If the government believes that REF does not have broad support from the sector it could intervene. It is faulty to assume that the REF is an inevitable part of the research landscape. As Chancellor, Gordon Brown attempted to axe its predecessor on the basis that it had become too burdensome. Dominic Cummings, a former adviser to the Prime Minister, also wished to bin the REF. UCU opposed REF 2014. The think tank UK Day One also published a widely shared paper arguing for scrapping the current REF.

    The REF has survived because of the lack of better alternatives, its skilful management, and its broad, if sometimes qualified, support. The moment the political pain of REF outweighs its perceived research benefits it will be ripe for scrapping by a government committed to reducing costs and reducing the research burden.

    The future

    The premise of the new REF is that research is a team sport and the efforts of the team that create the research should be measured and therefore rewarded. The corollary of identifying research as a product of a unit rather than an individual is that the players, in this case researchers and university staff, have had their skills unduly diminished, hidden, or otherwise not accounted for because of pervasive biases in the research landscape.

    It is impossible to argue that, by any reasonable measure, there aren’t significant issues with equality in research. This impacts the lives and career prospects of researchers and the UK economy as a whole. It would be an issue for any serious research funder to back away from work that seeks to improve the diversity of research.

    It is in this light where perhaps the biggest risk of all lies for Research England. If it pushes on with the metrics and measures it currently has and the result of REF is seen as unfair or structurally unsound it will do irreversible harm to the wider culture agenda. The idea of measuring people, culture, and environment will be put into the “too hard to do” box.

    This work is too important to be done quickly but the urgency of the challenge cannot be dropped. It is an unenviable position to be in.

    REF 2030?

    If a conclusion is reached that it is not feasible to carry the sector toward a new REF in time for 2029, there only seems to be one route forward, which is to return to a system more like 2021. This is not because that system was perfect (although it was generally seen as a good exercise) but because it would be unfeasible to carry out further system changes at this stage. Pushing the exercise back to 2030 would mean allocating funding from an exercise completed almost a decade prior. It seems untenable to do so because of how much institutions will have changed in this period.

    The work going on to measure PCE is not only helpful in the context of REF but, alongside work coming out of the Metascience Unit and UKRI centrally, among others, is part of the way in which the sector can be supported to measure and build a better research culture and environment. This work within the pilots is of such importance that it would make sense to stand these groups up over a longer time period with a view to building to the next exercise, while improving practice within universities more generally on an ongoing basis.

    As I wrote back in 2023, complexity in REF is worthwhile where it enhances university research. That complexity has now become the crux of the debate. If Research England reaches the conclusion that the cost and complexity of the desired future outstrips the capacity and knowledge of the present, the opportunity is to pause, pilot, learn, improve, and go again.

    Tactical compromise for now – with the explicit intention of taking time to agree a strategic direction on research as more of a shared and less of an individual endeavour – is possible. To do so will require making the political and practical case for a different future (as well as the moral one) ever more explicit, explaining the trade-offs it will involve, and crucially building a consensus on how that future will be funded and measured. Next year is a decade on from the Stern Review; perhaps it is time for another independent review of REF.

    A better future for research is possible but only where the government, funders, institutions, and researchers are aligned.

  • REF panels must reflect the diversity of the UK higher education sector

    As the sector begins to prepare for REF 2029, with a greater emphasis on people, culture and environment and the breadth of forms of research and inclusive production, one critical issue demands renewed attention: the composition of the REF panels themselves. While much of the focus rightly centres on shaping fairer metrics and redefining engagement and impact, we should not overlook who is sitting at the table making the judgments.

    If the Research Excellence Framework is to command the trust of the full spectrum of UK higher education institutions, then its panels must reflect the diversity of that spectrum. That means ensuring meaningful representation from a wide range of universities, including Russell Group institutions, pre- and post-92s, specialist colleges, teaching-led universities, and those with strong regional or civic missions.

    Without diverse panel representation, there is a real risk that excellence will be defined too narrowly, inadvertently privileging certain types of research and institutional profiles over others.

    Broadening the lens

    Research excellence looks different in different contexts. A university with a strong regional engagement strategy might produce research that is deeply embedded in local communities, with impacts that are tangible but not easily measured by traditional academic metrics, yet with clear international excellence. A specialist arts institution may demonstrate world-leading innovation through creative practice that doesn’t align neatly with standard research output categories.

    The RAND report looking at the impact of research through the lens of the REF 2021 impact cases rightly recognised the importance of “hyperlocality” – and we need to ensure that research and impact is equally recognised in the forthcoming REF exercise.

    UK higher education institutions are incredibly diverse, with different institutions having distinct missions, research priorities, and challenges. REF panels that lack representation from the full spectrum of institutions risk bias toward certain types of research outputs or methodologies, particularly those dominant in elite institutions.

    Dominance of one type of institution on the panels could lead to an underappreciation of applied, practice-based, or interdisciplinary research, which is often produced by newer or specialist institutions.

    Fairness, credibility, and innovation

    Fair assessment depends not only on the criteria applied but also on the perspectives and experiences of those applying them. Including assessors from a wide range of institutional backgrounds helps surface blind spots and reduce unconscious bias. It also allows the panels to better understand and account for contextual factors, such as variations in institutional resources, missions, and community roles, when evaluating submissions.

    Diverse panels also enhance the credibility of the process. The REF is not just a technical exercise; it shapes funding, reputations, and careers. A panel that visibly includes internationally recognised experts from across the breadth of the sector helps ensure that all institutions – and their staff – feel seen, heard, and fairly treated, and that a rigorous assessment of the UK’s research prowess is made across the diversity of research outputs, whatever their form.

    Academic prestige and structural advantages (such as funding, legacy reputations, or networks) can skew assessment outcomes if not checked. Diversity helps counter bias that may favour research norms associated with more established, research-intensive institutions. Panel diversity encourages broader thinking about what constitutes excellence, helping to recognise high-quality work regardless of institutional setting.

    Plus there is the question of innovation. Fresh thinking often comes from the edges. A wider variety of voices on REF panels can challenge groupthink and encourage more inclusive and creative understandings of impact, quality, and engagement.

    A test of the sector’s commitment

    This isn’t about ticking boxes. True diversity means valuing the insights and expertise of panel members from all corners of the sector and ensuring they have the opportunity to shape outcomes, not just observe them. It also means recognising that institutional diversity intersects with other forms of diversity, including protected characteristics, professions and career stage, which must also be addressed.

    The REF is one of the most powerful instruments shaping UK research culture. Who gets to define excellence in the international context has a profound impact on what research is done, how it is valued, and who is supported to succeed. REF panels should reflect the diversity of UK HEIs to ensure fairness, credibility, and a comprehensive understanding of research excellence across all contexts.

    If REF 2029 is to live up to the sector’s ambitions for equity, inclusion, and innovation, then we must start with its panels. Without diverse panels, the REF risks perpetuating inequality and undervaluing the full range of scholarly contributions made across the sector, even as it evaluates universities on their own people, culture, and environment. The composition of those panels will be a litmus test for how seriously we take those commitments.

  • The risk of unrepresentative REF returns hasn’t gone away

    The much awaited Contributions to Knowledge and Understanding (CKU) guidance for REF 2029 is out, and finally higher education institutions know how the next REF will work for the outputs component of the assessment. Or do they?

    Two of us have written previously about the so-called portability issue: if a researcher moves to a new institution, it is the new institution to which the research outputs are credited and to which future REF-derived funding potentially flows.

    We and others have argued that this portability supports the mobility of staff at the beginning of their careers and of staff who are facing redundancy. We believe that this is an important principle, which should be protected in the design of the current REF. If we believe that the higher education system should nurture talent, then the incentive structure underpinning the REF should align with this principle.

    We maintain that the research, its excellence, and the integrity with which it is performed depends upon the people that undertake it. Therefore, we continue to support some degree of portability as per REF 2021, acknowledging that the situation is complex and that this support of individual careers can come at the expense of the decoupling and the emerging focus on institutions. The exceptions delineated around “longform and/or long process outputs” in the CKU guidance are welcome – the devil will lie in the detail.

    Who the return represents

    Leaving aside portability, the decoupling of outputs from individuals has also resulted in a risk to the diversity of the return, especially in subject areas where the total number of eligible outputs is very high.

    In previous REF exercises the rules were such that the number of outputs any one researcher could return to the department/unit’s submission was restricted (four in REF 2014 and five in REF 2021). This restriction ensured that each unit’s return comprised a diversity of authors, a diversity of subdisciplines and a diversity of emerging ideas.

    We recognise that one could argue the REF is an excellence framework, not a diversity framework. However – like many – we believe that REF also has a role to play in supporting the inclusive research community we all wish to champion. REF is also about diversity – of approaches, of methodologies, of research areas – and research needs that diversity to ensure the effective teams are in place to deliver on the research questions. What would the impact be on research strategies if individual units increasingly came to be dominated by a small number of authors?

    How the system plays out

    Of course, the lack of restriction on output numbers does not preclude units from creating a diverse return. However, especially in this time of sector-wide financial pressures, those in charge of a submission may feel they have no option other than to select outputs to maximise the unit score and hence future funding.

    This unbounded selection process will likely lead to intra-unit discord. Even in an ideal case it will result in the focus being on outputs covering a subset of hot topics or, worse, a subset of perceived high-quality journals. The unintended consequence of this focus could be to place undue importance on the large research groups led by previously labelled “research stars”. For large HEIs with large units including several of these “stars”, the unit return might still appear superficially diverse, but the underlying return might be remarkably narrow.

    While respecting fully the contribution made by these traditional leaders, we think the health of our research future critically depends upon the championing of the next and diverse generation of researchers and their ideas too. We maintain the limits imposed in previous exercises did this, even if that was not their primary intent.

    Some might, for a myriad of reasons, think that our concerns are misplaced. The publication of the guidance suggests that we have not managed to land these important points around diversity and fairness.

    However, we are sure that many of those who have these views wish to see a diverse REF return too. If we have not persuaded Research England and the other funding councils to reimpose output limits, we urge them at least to ensure that the data is collected as part of the process such that the impact upon the diversity of this unrestricted return can be monitored and hence that future REF exercises can be appropriately informed. This will then allow DSIT and institutions to consider whether the REF process needs to be adjusted in future.

    Our people, their excellence and their diversity, we would argue, matter.

  • Podcast: Spending review, Tooling Up, REF, students at work

    This week on the podcast we examine the government’s spending review and what it means for higher education. How will the £86bn R&D commitment translate into real-terms funding, and why was education notably absent from the Chancellor’s priorities?

    Plus we discuss the Post-18 Project’s call to fundamentally reshape HE policy away from market competition, the startling new REF rules, and the striking rise in student term-time working revealed by the latest Student Academic Experience Survey.

    With Stephanie Harris, Director of Policy at Universities UK, Ben Vulliamy, Executive Director at the Association of Heads of University Administration, Michael Salmon, News Editor at Wonkhe, and presented by Mark Leach, Editor-in-Chief at Wonkhe.

    Tooling up: Building a new economic mission for higher education

    Investing for the long term often loses out to pensioner power

    What’s in the spending review for higher education

    The student experience is beyond breaking point

    How to assess anxious, time-poor students in a mass age

    REF is about institutions not individuals

  • REF is about institutions not individuals

    The updated guidance on Contributions to Knowledge and Understanding (CKU: formerly known as outputs) will be seen as the moment it became clear what REF is.

    REF is not solely, or even mostly, about measuring researcher performance. Its primary purpose is to assess how organisations support research excellence.

    It is the release which signals that research may be produced by individuals but is assessed at an institutional level, and that the only measure that matters is whether the institution was responsible for supporting the research that led to the output.

    2014 Redux

    It is worth rehashing how we got here.

    REF is the tool Research England and its devolved equivalents use to decide how much QR funding universities will receive. One thing it measures is the research output of universities. The research output of universities is the output of the researchers who work there (or a sample of their outputs).

    The question that REF has always grappled with is whether to measure the quality of research or the quality of researchers. The latter would be quite a straightforward exercise and one that has been done in different formats over the years. Get a cross-sample of researchers to submit their best research at a given point in time and then ask a panel to rate its quality.

    Depending on the intended policy outcome, the exercise might make every researcher submit some research to ensure a sample is truly representative. It might limit how much any one researcher can submit to ensure a sample is balanced. It might tweak measurements in any number of ways to change what a researcher can submit and when, depending on the objectives of the exercise.

    The downside of this approach is that it is not an entirely helpful way to understand the quality of university research across an entire institution. It tells you how good researchers are within a specific field, like a Unit of Assessment, but it does not tell us how good the provider is at creating the conditions in which that research takes place. Unless you believe, and it is not an unreasonable belief, that there is no difference between the aggregate of individual research outputs and the overall quality of institutional research.

    Individuals and teams

    To look at it another way: Jude Bellingham looks very different playing for England than he does for Real Madrid. He is still the same footballer with the same skills and same flaws. The difference is that for Real Madrid he is playing for a team with an ethos of excellence and a history of winning, and for England he is playing for a team that consistently fails to achieve anything of note.

    The only fair way to measure England is not to use Jude Bellingham as a proxy of their performance but to measure the performance of the England team over a defined period of time. In other words, to decouple Bellingham’s performance from England’s overall output.

    As put in a rather punchy blog by Head of REF Policy Jonathan Piotrowski,

    REF 2029 shifts our focus away from the individual and towards the environment where that output was created and how it was supported. This change in perspective is essential for two key reasons: first, to gather the right evidence to inform funding decisions that enable institutions to support more excellent research and second, to fundamentally recognise the huge variety of roles and outputs that contribute to the research ecosystem, including those whose names may not appear as authors and outputs that extend beyond traditional journal publications.

    Who does research?

    The philosophical questions are whether research is created by researchers, institutions, or both, and to what degree – and, in a complex system involving teams of researchers, businesses, and institutions, whether it is any easier or more accurate to ascribe outputs to researchers than it is to institutions. The policy implication is that providers should be less concerned about who is doing research and more concerned about the conditions in which research occurs. The upshot is that the research labour market will become less dynamic, as there is less incentive to appoint people because they are “REFable”, which will have both winners and losers.

    The mechanism for decoupling in REF 2029 is to remove the link between staff and their outputs. The new guidance sets out precisely how this decoupling process will work.

    There will be no staff details submitted and outputs will not be submitted linked to a specific author. Instead, outputs are submitted to a Unit of Assessment. This is not a new idea. The 2016 review of the REF (known as the Stern Review) recommended that

    The non-portability of outputs when an academic moves institution should be helpful to all institutions including smaller institutions with strong teams in particular areas which have previously been potential targets for ‘poaching’.

    However, it is worth emphasising that this is an enormous change from previous practice. In REF 2014 the whole output was captured by whichever institution a researcher was at, at the REF census date. In REF 2021 if a researcher moved between institutions the output was captured by both. In REF 2029 the output will be captured by the institution where there is a “substantive link.”

    Substantive links

    A substantive link will usually be demonstrated by employment for a period of 12 months at a minimum of 0.2 FTE equivalent. The staff member does not have to be at the provider at the point the output is submitted. Other indicators may include:

    • evidence of internal research support (for example, funding for research materials, technical or research support, conference attendance)
    • evidence of work in progress presentations (internally and externally)
    • evidence of an external grant to support a relevant program of research

    In effect, this means that the link between researchers and REF is that their research took place in a specific institution, but it is ultimately the institution that is being assessed. The thing that is being assessed is the relationship between the research environment and the creation of the output. Not the relationship between the output and the researcher.

    As the focus of assessment shifts, so do the rules on what can or cannot be submitted. As we know from previous guidance, there is no maximum or minimum number of submissions per staff member. There may be some researchers at, or who were at, a provider who find their work appears in an institution’s submission a number of times, and maybe even across disciplines (there will now be no interdisciplinary flags, but an output may be submitted to more than one UoA and receive different scores).

    The obvious challenge here is that while providers should submit representative outputs, the overriding temptation will be to submit what they believe to be their “best” and then work backwards to justify why it is representative. The REF team have anticipated this problem, and the representativeness of a submission will be assessed through the disciplinary-led evidence statements. The full guidance on what these contain is yet to be released, but we know that

    The important issues of research diversity, demographics and career stages will be assessed as part of the wider disciplinary level evidence statements

    Research England’s position is that aligning outputs to where they are created, not who creates them, is a better way to measure institutional research performance. This should also end the incentive for universities to recruit researchers in order to capture their REF output – a practice that, in Research England’s thinking, favours the larger universities that can afford to poach research staff.

    Debates had and debates to come

    In a previous piece for Wonkhe, Maria Delgado, Nandini Das, and Miles Padgett made the case that portability is key to fairness in REF. The opposite argument is being put forward by Research England. Maria, Nandini, and Miles made the case that, whether we like it or not, one of the ways in which academics secure better career prospects is by improving the REF performance of a provider’s UoA. Research England makes the case that

    The core motivation is to minimise the REFs ability to exert undue influence on people’s careers. To achieve this, institutional funding (remember, QR funding does not track to individuals or departments) should follow the institutions that have genuinely provided and invested in the environment in which research is successful. Environments that recognise the collaborative nature of research and the diverse roles involved, rather than simply rewarding institutions positioned to recruit researchers to get reward for their past output.

    It is possible that both arguments may be right. If outputs are tied to institutions, the incentive for institutions that want to do well in REF is to capture a greater number of high quality outputs to include in their submission. The way to do this is to have more researchers supported to do high quality work. On the other hand, at an individual level and in a time of financial crisis for the sector, there are likely some researchers who benefit from being able to take their research output with them when they move institutions.

    In the comments on our initial portability piece it was flagged that researchers’ work could form part of an assessment where they had no relationship with the provider. This feels particularly egregious if they have been made redundant as part of wider cost saving – the message being that the research output is high quality but it is nonetheless necessary to remove the author’s post. The REF team have considered this and

    Outputs where the substantive link occurred before the submitted output was made publicly available, will not be eligible for submission where the author was subject to compulsory redundancy.

    The guidance explains that there may be times where there is a substantive relationship but the research has not yet been published. On the face of it this seems a sensible compromise but if the logic is that a provider is the place where research outputs are created it seems contradictory (albeit kinder) to then limit the conditions through which that work can be assessed. It is possible there will be some outputs which were in the process of being published but not yet assessed which would fall into this clause.

    The guidance confirms a direction of travel that was established as far back as REF 2021 and made clear in the guidance so far for REF 2029. While the debate on who should be assessed in which circumstances continues, the wider concern for many will be that there is still significant guidance outstanding, particularly on People, Culture and Environment, and the submission window for REF closes 30 months from now.

    A direction has been set. The sector needs to know the precise rules it is playing by if it is going to go along with it. There is undoubtedly a lot of goodwill around measuring research environments, culture, and the ways in which outputs are created more comprehensively. That goodwill will evaporate if guidance is not timely, clear, or complete.

  • Portability within REF remains key to fairness

    When a researcher produces an output and moves between HEIs, portability determines which institution can submit the output for assessment and receive the resulting long-term quality-related funding.

    However, a joint letter by the English Association, the Institute of English Studies, and University English, and subsequent interventions from other subject associations, demonstrate that unaddressed concerns over the portability of research outputs are coming to a head.

    In REF 2014, if a researcher moved HEI prior to a census date, then only the destination HEI submitted the output. In 2021, to mitigate the perceived inflationary transfer market in researchers, the rules were changed so that if researchers transferred, both the original and destination HEIs could return the output. This rightly recognised the role of both HEIs, having supported the underpinning research and invested in the research of the future respectively.

    The initial decisions published in 2023 decoupled research outputs from their authors, with outputs needing to have a “substantive connection” to the submitting institution. Two years on, we still don’t know the impact of this decision on portability. One of the unintended consequences of decoupling outputs from the researchers who authored them, and removing the notion of a staff list, is that only the address line of the author affiliation remains. This decoupling means that any notion of portability of outputs with a specific researcher is problematic.

    The portability of research outputs is a crucial element of the assessment process. It supports key values such as career security and development, equality, diversity, and inclusion, as well as the financial sustainability of HEIs. More importantly, linking outputs to individual researchers rather than institutions is necessary, particularly in the current Higher Education landscape, to ensure the integrity of both research and the assessment exercise itself. This approach ensures that researchers receive due credit for their work, prevents institutions from unfairly benefiting from outputs produced elsewhere or from structural changes such as departmental closures, and upholds a fairer, more transparent system that reflects actual research contributions.

    The sector is in a different place than it was even a few years ago. Many HEIs are financially challenged, with widespread redundancies an ongoing reality. Careers are now precarious at every stage. Making new academic appointments, or even maintaining existing ones, is subject to strict financial scrutiny. Across all facets of research – from the medical and engineering sciences to the arts and humanities – the income derived from the REF is essential to the agility of the research landscape.

    Whether we like it or not, the decision to hire someone is in part financial. That an early career researcher could be recruited to improve a unit’s (subject) REF submission and hence income is a reality of a financially pressured system. At a different career stage, many distinguished researchers are facing financially imposed redundancy. The agility of the sector to respond is aided by the portability of the researcher’s outputs to allow them to continue their career and their contributions to the sector at a new HEI. The REF derived income is an important aspect of this agility.

    Setting aside financial considerations, separating research outputs from the researchers who created them sends a damaging message. It downplays the fundamental role of individuals in driving research and undermines the sense of agency that is crucial to its integrity and rigor.

    Auditing the future

    As researchers, we recognise the privilege of being supported in pursuing what is often both a passion and a vocation. Decoupling outputs from their creators disregards the individual researcher, their collaborations, and their stakeholders. It also oversimplifies the complex research ecosystem, where researchers work in partnership with their employing institutions, sector bodies, archives, charities, funders, and other key stakeholders.

    REF-derived income should not be seen just as a retrospective reward for an HEI’s past support of research, but rather as the nation’s forward-looking investment in the discoveries of tomorrow. To treat it merely as an audit is to overlook its transformative potential. Hence the outputs on which the assessment is based should come both from the researchers who contributed to the unit while employed by the university and from the researchers who are currently in the unit to contribute to the research that is ongoing, indelibly linking and interweaving past, present and future research.

    In addition to concerns over portability, decoupling outputs from the researchers that authored them risks undermining a central premise of the assessment that many of us working to improve our research culture want to see. Decoupling means there is no auditable limit to the number of outputs written by any one individual that can be submitted for assessment. Within the REF, we wish to see outputs authored by a diversity of staff within the unit, staff at different career stages and staff working in different sub areas. By decoupling the author from outputs, a future REF risks undermining the very fairness that the rule change was introduced to ensure.

    Not fair not right

    Sometimes the unintended consequences of an idea outweigh the benefits it was hoping to achieve. The decoupling of outputs from the researchers that made them possible and the knock-on consequences through restrictions to portability and reduced diversity is one of these occasions.

    There has never been a more critical time to uphold fairness in research policy.

    If the four funding bodies are to remain agile they must recognise that decoupling research outputs from the individuals who created them is not only harming those facing redundancy but also undermining HEIs’ ability to support the next generation of researchers upon whom our future depends. By the same token, ensuring the portability of outputs is essential for maintaining integrity, protecting careers, and sustaining a dynamic and equitable research environment. The need for change is both urgent and imperative.

    Source link

  • Research supervision in the context of REF – time for a step change?

    Research supervision in the context of REF – time for a step change?

    At a time when resources within research organisations are stretched, the PGR experience, and the role doctoral supervisors play in supporting that experience, need closer attention.

    The release of the pilot indicators for REF People, Culture and Environment (PCE) has prompted a flurry of conversations across UK universities as to what ‘counts’. For the first time, institutions may evidence that “infrastructure, processes and mechanisms in place to support the training and supervision of research students are working effectively” and are invited to consider the inclusion of “pre and post training assessments” for supervisors.

    This signals to institutions that research supervision needs to be taken seriously – both in terms of the quality and consistency of the PGR experience and the support and recognition for supervisors themselves. In doing so it validates the contribution of doctoral research to the research ecosystem.

    Accelerated prioritisation of research supervision shouldn’t come as a complete surprise. A lack of consistency in the PGR experience was recognised less than a year ago in the UKRI New Deal for Postgraduate Research, which stated that “All PGR students should have access to high quality supervision and Research Organisations should ensure that everyone in the supervisory team is well supported, including through induction for new supervisors and Continuous Professional Development (CPD)”. That messaging has been repeated in the UKRI Revised Statement of Expectations for Doctoral Training (2024), alongside a call to research organisations to build supervisor awareness of PGR mental health, wellbeing, bullying and harassment, and equality, diversity and inclusion issues.

    So, what do we know about research supervision?

    Data from the UK Research Supervision Survey 2024 (UKRSS) confirms that, overwhelmingly, research supervision is considered valuable, rewarding and enjoyable by those who undertake it, and that it has a positive impact on supervisors’ own research. However, a third of respondents reported feeling anxious about supervision; their main challenge was fostering student confidence and focus, followed by offering compassionate support to students facing difficult issues ranging from mental health and wellbeing to finances and funding.

    Lack of time continues to be a barrier to high-quality supervision practice, and rising supervisor-to-candidate ratios complicate this further. While early career supervisors were likely to be allocated one to two candidates, those later in their careers could be supervising five to ten – and only 30 per cent of UKRSS respondents reported that their institution had a policy on maximum candidate numbers. Respondents also made it clear that doctoral research supervision is not being adequately accounted for in workload allocations: a typical workload model was described as allocating 42 hours per candidate per year, yet supervisors reported investing an average of 62 hours.

    Time constraints like these significantly limit supervisors’ ability to participate in CPD opportunities. That is itself a barrier to good supervision practice, as the UKRSS revealed that supervisors who engage in regular, mandatory CPD reported higher levels of confidence in all areas of supervisory practice. A staggering 91 per cent of respondents who had experienced mandatory induction reported that they felt able to enact their institution’s procedures around supervision – compared to 66 per cent of those for whom induction was not mandatory and 55 per cent who reported no mandatory requirements.

    The data illustrates that supervisors care about and take satisfaction from supporting the next generation of researchers, but they are getting a raw deal from their institutions in terms of time, reward, recognition and opportunities to develop and enhance their own practice. Underscoring this point, just 56 per cent of supervisors reported feeling valued by their institution, compared to 90 per cent who felt valued by their students. Until now this has gone under the radar, making the inclusion of the PCE indicators a welcome sign for those of us working to make changes within the sector.

    Engaging supervisors with high quality Continuing Professional Development

    Focus groups conducted with supervisors at five UK universities as part of the Research England-funded Next Generation Research SuperVision Project (RSVP) have provided insight into what CPD is considered useful, meaningful and relevant. Supervisors were well aware of the need to develop and improve their practice, with one participant reflecting: “… there isn’t sufficient training for supervision, you have a huge responsibility to another person’s career. So I think the idea that we ‘wing it’ perhaps shouldn’t be acceptable.”

    An overwhelming majority of participants reported that the most important aspects of their supervision practice and development come from interactions with, and support from, their peers and more experienced colleagues. The idea that supervision practice is best developed by watching other supervisors on the job and through communities of practice was repeated by participants across experience levels, genders, disciplines and institutions – with some even claiming this to be the only way to become a truly good supervisor.

    Far from being reluctant to engage in professional development, many supervisors welcomed the idea of having the space and time to reflect on their practice. What they were less keen on was anything perceived as a ‘tick-box’ exercise – examples given included short courses without time for discussion and self-directed online modules. Some recognised that these approaches can be useful, but felt they should form part of a more varied approach to CPD.

    Generally speaking, supervisors with less experience were more likely to engage in facilitated workshops and other interventions that help them understand their role and the doctoral journey. Those with more experience expressed a strong preference for discussion-based CPD, including peer reading groups, opportunities for facilitated reflection and mentoring.

    Recognising supervision as part of research culture

    Whatever the final version of the PCE indicators looks like, there is now a growing body of empirical evidence to suggest that a revision in the way we manage, reward and recognise research supervision is needed. When the government enabled universities to introduce fees for undergraduates, the issue of quality assurance quickly surfaced. It was recognised that students should be taught by properly trained staff with a knowledge and understanding of pedagogy and approaches to learning and teaching. Arguably that moment has now come for research supervision.

    If the UK HE sector wishes to attract capable, committed, creative doctoral candidates from a range of backgrounds then those supervising them need to be treated, and trained, as professional practitioners. This means creating the time and space to enable supervisors at all levels of experience to engage in meaningful exchanges about their practice and to refresh their knowledge of policies and new areas as they arise.

    Quick wins?

    For institutions looking to bolster their supervision support, there are some empirically grounded ways to improve practice.

    Firstly, tap into existing levers for change. The Concordat to Support the Career Development of Researchers outlines the need for PIs (many of whom are supervisors) to engage in professional development. Postdoctoral researchers are also required to engage in “10 days of professional development”. Since postdoctoral researchers are often informally involved in doctoral supervision (15 per cent of UKRSS respondents identified themselves as ‘early career researchers’), their engagement in CPD could also be counted. Actively recognising and celebrating the diversity of doctoral researchers and their supervisors also aligns with Athena Swan.

    Secondly, increase the visibility of provision. Many supervisors in the UKRSS and focus groups didn’t know what CPD was available in their institution. Very few knew about routes to recognition of supervisory practice (e.g. through the UKCGE Research Supervision Recognition Programme). There is little to be lost in an institution showcasing itself to prospective researchers and funders as one which takes the quality of supervision seriously and actively invests in, rewards and recognises supervisors.

    Thirdly, actively enable conversations about supervision. Aside from formal training, it is the time spent together that is often valuable. This may include offering simple opportunities for new and experienced supervisors to come together to talk about their experiences on topics that matter to them. It may mean enlisting a few champions who will speak about their experience. If there is already a mentoring scheme, research supervision could be added to the list of topics that can be discussed as part of that relationship. It is also helpful to encourage supervisors to engage with the UKCGE Supervisor’s Network, which offers cross-disciplinary and national-level value as a community of practice.

    Finally, use existing PGR and supervisor networks and expert spaces to find out what works well and where the gaps are. This includes working with RSVP, which is designing, with 58 partners, CPD interventions for new and more experienced supervisors around the topics identified above. Following pilots and evaluation, these will be made freely available to the sector. Specific resources to support supervisors to engender a *neurodiversity-affirmative culture will be available later this year. Webpages to support mentoring will be available very soon. Join the RSVP mailing list to be kept up to date.


    *With thanks to Professor Debi Riby at the Centre for Neurodiversity & Development at Durham University.

    Source link