Category: REF 2029

  • The disagreements on REF cannot go on forever – it may be time for a compromise

    The submission deadline for REF is autumn 2028. That is not far away, and there are still live debates on significant parts of the exercise with no obvious way forward in sight.

    As the Contributions to Knowledge and Understanding guidance makes clear, significant areas of the exercise are still awaiting guidance. The People, Culture and Environment (PCE) criteria and definitions will be published in autumn this year. Undoubtedly, this will kick off further rounds of debate on REF and its purposes. It feels like there is a lot left to do and not much time to do it in.

    Compromise

    The four UK higher education funding bodies could take the view that the sector’s disquiet about REF – and judging by the events I attend and the people I speak to, it does seem significant – will eventually dissipate as the business of REF gets underway.

    This now seems unlikely. It is clear that there are increasingly entrenched views on the workability or not of the new portability measures, and there is still the ongoing debate on the extent to which research culture can be measured. Research England has sought to take the sector toward ends which have broad support, improving the diversity and conditions of research, but there is much less consensus on how to get there.

    The consequences of continuing as is are unpredictable but potentially significant. At the most practical level, the people working on REF have only so much resource and bandwidth. The debate about the future of REF will not go away as more guidance is released – in fact, it is likely to intensify – and getting to submission amid significant disagreement will drain resources and time.

    The debate also crowds out the other work going on in research. While the future of REF is being debated, time is taken away from all the funding that is not allocated through REF, all the problems with research that do not stem from this quinquennial exercise, and the myriad other research issues that sit beyond the sector’s big research audit. The REF looms large in the sector’s imagination, but the current impasse is eclipsing much else.

    If the government believes that REF does not have broad support from the sector it could intervene. It would be a mistake to assume that the REF is an inevitable part of the research landscape. As Chancellor, Gordon Brown attempted to axe its predecessor on the basis that it had become too burdensome. Dominic Cummings, former advisor to the Prime Minister, also wished to bin the REF. UCU opposed REF 2014. The think tank UK Day One published a widely shared paper arguing for scrapping the current REF.

    The REF has survived for lack of better alternatives, because of its skilful management, and because of its broad, if sometimes qualified, support. The moment the political pain of REF outweighs its perceived research benefits, it will be ripe for scrapping by a government committed to reducing costs and the research burden.

    The future

    The premise of the new REF is that research is a team sport and the efforts of the team that create the research should be measured and therefore rewarded. The corollary of identifying research as a product of a unit rather than an individual is that the players, in this case researchers and university staff, have had their skills unduly diminished, hidden, or otherwise not accounted for because of pervasive biases in the research landscape.

    By any reasonable measure there are significant issues with equality in research. These affect the lives and career prospects of researchers and the UK economy as a whole. It would be untenable for any serious research funder to back away from work that seeks to improve the diversity of research.

    It is in this light that perhaps the biggest risk of all lies for Research England. If it pushes on with the metrics and measures it currently has, and the result of REF is seen as unfair or structurally unsound, it will do irreversible harm to the wider culture agenda. The idea of measuring people, culture, and environment will be put into the “too hard to do” box.

    This work is too important to be done quickly but the urgency of the challenge cannot be dropped. It is an unenviable position to be in.

    REF 2030?

    If the conclusion is reached that it is not feasible to carry the sector toward a new REF in time for 2029, the only route forward seems to be a return to a system more like that of 2021. This is not because that system was perfect (although it was generally seen as a good exercise) but because further changes would be unfeasible at this stage. Pushing the exercise back to 2030 would mean allocating funding from an exercise completed almost a decade prior, which seems untenable given how much institutions will have changed in that period.

    The work going on to measure PCE is helpful not only in the context of REF but also – alongside work coming out of the Metascience Unit and UKRI centrally, among others – as part of the way the sector can be supported to measure and build a better research culture and environment. The work within the pilots is of such importance that it would make sense to stand these groups up over a longer period with a view to building toward the next exercise, while improving practice within universities on an ongoing basis.

    As I wrote back in 2023 complexity in REF is worthwhile where it enhances university research. The complexity has now become the crux of the debate. If Research England reaches the conclusion that the cost and complexity of the desired future outstrips the capacity and knowledge of the present, the opportunity is to pause, pilot, learn, improve, and go again.

    Tactical compromise for now – with the explicit intention of taking time to agree a strategic direction on research as more of a shared and less of an individual endeavour – is possible. Doing so will require making the political and practical case for a different future (as well as the moral one) ever more explicit, explaining the trade-offs it will involve, and crucially building consensus on how that future will be funded and measured. Next year is a decade on from the Stern Review; perhaps it is time for another independent review of REF.

    A better future for research is possible but only where the government, funders, institutions, and researchers are aligned.

  • REF panels must reflect the diversity of the UK higher education sector

    As the sector begins to prepare for REF 2029, with a greater emphasis on people, culture and environment and the breadth of forms of research and inclusive production, one critical issue demands renewed attention: the composition of the REF panels themselves. While much of the focus rightly centres on shaping fairer metrics and redefining engagement and impact, we should not overlook who is sitting at the table making the judgments.

    If the Research Excellence Framework is to command the trust of the full spectrum of UK higher education institutions, then its panels must reflect the diversity of that spectrum. That means ensuring meaningful representation from a wide range of universities, including Russell Group institutions, pre- and post-92s, specialist colleges, teaching-led universities, and those with strong regional or civic missions.

    Without diverse panel representation, there is a real risk that excellence will be defined too narrowly, inadvertently privileging certain types of research and institutional profiles over others.

    Broadening the lens

    Research excellence looks different in different contexts. A university with a strong regional engagement strategy might produce research that is deeply embedded in local communities, with impacts that are tangible but not easily measured by traditional academic metrics, but with clear international excellence. A specialist arts institution may demonstrate world-leading innovation through creative practice that doesn’t align neatly with standard research output categories.

    The RAND report looking at the impact of research through the lens of the REF 2021 impact cases rightly recognised the importance of “hyperlocality” – and we need to ensure that such research and impact are equally recognised in the forthcoming REF exercise.

    UK higher education institutions are incredibly diverse, with distinct missions, research priorities, and challenges. REF panels that lack representation from the full spectrum of institutions risk bias toward certain types of research outputs or methodologies, particularly those dominant in elite institutions.

    Dominance of one type of institution on the panels could lead to an underappreciation of applied, practice-based, or interdisciplinary research, which is often produced by newer or specialist institutions.

    Fairness, credibility, and innovation

    Fair assessment depends not only on the criteria applied but also on the perspectives and experiences of those applying them. Including assessors from a wide range of institutional backgrounds helps surface blind spots and reduce unconscious bias. It also allows the panels to better understand and account for contextual factors, such as variations in institutional resources, missions, and community roles, when evaluating submissions.

    Diverse panels also enhance the credibility of the process. The REF is not just a technical exercise; it shapes funding, reputations, and careers. A panel that visibly includes internationally recognised experts from across the breadth of the sector helps ensure that all institutions – and their staff – feel seen, heard, and fairly treated, and that a rigorous assessment of the UK’s research prowess is made across the diversity of research outputs, whatever their form.

    Academic prestige and structural advantages (such as funding, legacy reputations, or networks) can skew assessment outcomes if left unchecked. Diversity helps counter bias that may favour research norms associated with more established institutions. Panel diversity encourages broader thinking about what constitutes excellence, helping to recognise high-quality work regardless of institutional setting.

    Plus there is the question of innovation. Fresh thinking often comes from the edges. A wider variety of voices on REF panels can challenge groupthink and encourage more inclusive and creative understandings of impact, quality, and engagement.

    A test of the sector’s commitment

    This isn’t about ticking boxes. True diversity means valuing the insights and expertise of panel members from all corners of the sector and ensuring they have the opportunity to shape outcomes, not just observe them. It also means recognising that institutional diversity intersects with other forms of diversity, including protected characteristics, professions and career stage, which must also be addressed.

    The REF is one of the most powerful instruments shaping UK research culture. Who gets to define excellence in the international context has a profound impact on what research is done, how it is valued, and who is supported to succeed. REF panels should reflect the diversity of UK HEIs to ensure fairness, credibility, and a comprehensive understanding of research excellence across all contexts.

    If REF 2029 is to live up to the sector’s ambitions for equity, inclusion, and innovation, then we must start with its panels. Without diverse panels, the REF risks perpetuating inequality and undervaluing the full range of scholarly contributions made across the sector, even as it evaluates universities on their own people, culture, and environment. The composition of those panels will be a litmus test for how seriously we take those commitments.

  • The risk of unrepresentative REF returns hasn’t gone away

    The much awaited Contributions to Knowledge and Understanding (CKU) guidance for REF 2029 is out, and finally higher education institutions know how the next REF will work for the outputs component of the assessment. Or do they?

    Two of us have written previously about the so-called portability issue, whereby if a researcher moves to a new institution, it is the new institution to which the research outputs are credited and to which future REF-derived funding potentially flows.

    We and others have argued that portability supports the mobility of staff at the beginning of their careers and of staff facing redundancy. We believe this is an important principle, which should be protected in the design of the current REF. If we believe that the higher education system should nurture talent, then the incentive structure underpinning the REF should align with this principle.

    We maintain that the research, its excellence, and the integrity with which it is performed depend upon the people who undertake it. Therefore, we continue to support some degree of portability as per REF 2021, acknowledging that the situation is complex and that this support for individual careers can sit in tension with the decoupling agenda and its emerging focus on institutions. The exceptions delineated around “longform and/or long process outputs” in the CKU guidance are welcome – the devil will lie in the detail.

    Who the return represents

    Leaving aside portability, the decoupling of outputs from individuals has also resulted in a risk to the diversity of the return, especially in subject areas where the total number of eligible outputs is very high.

    In previous REF exercises the rules restricted the number of outputs any one researcher could contribute to a department or unit’s submission (four in REF 2014 and five in REF 2021). This restriction ensured that each unit’s return comprised a diversity of authors, subdisciplines, and emerging ideas.

    We recognise that one could argue the REF is an excellence framework, not a diversity framework. However – like many – we believe that REF also has a role to play in supporting the inclusive research community we all wish to champion. REF is also about diversity – of approaches, of methodologies, of research areas – and research needs diversity to ensure that effective teams are in place to deliver on the research questions. What would the impact on research strategies be if individual units increasingly became dominated by a small number of authors?

    How the system plays out

    Of course, the lack of restriction on output numbers does not preclude units from creating a diverse return. However, especially in this time of sector-wide financial pressures, those in charge of a submission may feel they have no option other than to select outputs to maximise the unit score and hence future funding.

    This unbounded selection process will likely lead to intra-unit discord. Even in the ideal case it will result in a focus on outputs covering a subset of hot topics or, worse, a subset of perceived high-quality journals. The unintended consequence of this focus could be to place undue importance on the large research groups led by those previously labelled “research stars”. For large HEIs with large units including several of these “stars”, the unit return might still appear superficially diverse, but the underlying return might be remarkably narrow.

    While fully respecting the contribution made by these traditional leaders, we think the health of our research future critically depends upon championing the next, diverse generation of researchers and their ideas too. We maintain that the limits imposed in previous exercises achieved this, even if that was not their primary intent.

    Some might, for a myriad of reasons, think that our concerns are misplaced. The publication of the guidance suggests that we have not managed to land these important points around diversity and fairness.

    However, we are sure that many who hold these views wish to see a diverse REF return too. If we have not persuaded Research England and the other funding councils to reimpose output limits, we urge them at least to ensure that data is collected as part of the process, so that the impact of this unrestricted approach on the diversity of the return can be monitored and future REF exercises appropriately informed. This would then allow DSIT and institutions to consider whether the REF process needs to be adjusted in future.

    Our people, their excellence and their diversity, we would argue, matter.

  • REF is about institutions not individuals

    The updated guidance on Contributions to Knowledge and Understanding (CKU: formerly known as outputs) will be seen as the moment it became clear what REF is.

    REF is not solely, or even mostly, about measuring researcher performance. Its primary purpose is to assess how organisations support research excellence.

    It is the release which signals that research may be produced by individuals but is assessed at an institutional level – and that the only measure that matters is whether the institution was responsible for supporting the research that led to the output.

    2014 Redux

    It is worth rehashing how we got here.

    REF is the tool Research England and its devolved equivalents use to decide how much QR funding universities will receive. One thing it measures is the research output of universities. The research output of a university is the outputs of the researchers who work there (or a sample of those outputs).

    The question that REF has always grappled with is whether to measure the quality of research or the quality of researchers. The latter would be quite a straightforward exercise and one that has been done in different formats over the years. Get a cross-sample of researchers to submit their best research at a given point in time and then ask a panel to rate its quality.

    Depending on the intended policy output the exercise might make every researcher submit some research to ensure a sample is truly representative. It might limit how much any one researcher can submit to ensure a sample is balanced. It might tweak measurements in any number of ways to change what a researcher can submit and when depending on the objectives of the exercise.

    The downside of this approach is that it is not an entirely helpful way to understand the quality of research across an entire institution. It tells you how good researchers are within a specific field, like a Unit of Assessment, but not how good the provider is at creating the conditions in which that research takes place. Unless, that is, you believe – and it is not an unreasonable belief – that there is no difference between the aggregate of individual research outputs and the overall quality of institutional research.

    Individuals and teams

    To look at it another way: Jude Bellingham looks very different playing for England than he does for Real Madrid. He is the same footballer with the same skills and the same flaws. The difference is that for Real Madrid he plays in a team with an ethos of excellence and a history of winning, while for England he plays in a team that consistently fails to achieve anything of note.

    The only fair way to measure England is not to use Jude Bellingham as a proxy of their performance but to measure the performance of the England team over a defined period of time. In other words, to decouple Bellingham’s performance from England’s overall output.

    As put in a rather punchy blog by Head of REF Policy Jonathan Piotrowski,

    REF 2029 shifts our focus away from the individual and towards the environment where that output was created and how it was supported. This change in perspective is essential for two key reasons: first, to gather the right evidence to inform funding decisions that enable institutions to support more excellent research and second, to fundamentally recognise the huge variety of roles and outputs that contribute to the research ecosystem, including those whose names may not appear as authors and outputs that extend beyond traditional journal publications.

    Who does research?

    The philosophical questions are whether research is created by researchers, institutions, or both, and to what degree – and, in a complex system involving teams of researchers, businesses, and institutions, whether it is any easier or more accurate to ascribe outputs to researchers than to institutions. The policy implication is that providers should be less concerned about who is doing research than about the conditions in which research occurs. The upshot is that the research labour market will become less dynamic – there is less incentive to appoint people because they are “REFable” – which will produce both winners and losers.

    The mechanism for decoupling in REF 2029 is to remove the link between staff and their outputs. The new guidance sets out precisely how this decoupling process will work.

    There will be no staff details submitted and outputs will not be submitted linked to a specific author. Instead, outputs are submitted to a Unit of Assessment. This is not a new idea. The 2016 review of the REF (known as the Stern Review) recommended that

    The non-portability of outputs when an academic moves institution should be helpful to all institutions including smaller institutions with strong teams in particular areas which have previously been potential targets for ‘poaching’.

    However, it is worth emphasising that this is an enormous change from previous practice. In REF 2014 the whole output was captured by whichever institution the researcher was at on the REF census date. In REF 2021, if a researcher moved between institutions the output was captured by both. In REF 2029 the output will be captured by the institution where there is a “substantive link.”

    Substantive links

    A substantive link will usually be demonstrated by employment for a period of at least 12 months at a minimum of 0.2 FTE. The staff member does not have to be at the provider at the point the output is submitted. Other indicators may include:

    evidence of internal research support (for example, funding for research materials, technical or research support, conference attendance)
    evidence of work in progress presentations (internally and externally)
    evidence of an external grant to support a relevant program of research.

    In effect, this means that the link between researchers and REF is that their research took place in a specific institution, but it is ultimately the institution that is being assessed. What is being assessed is the relationship between the research environment and the creation of the output, not the relationship between the output and the researcher.

    As the focus of assessment shifts, so do the rules on what can or cannot be submitted. As we know from previous guidance, there is no maximum or minimum number of submissions per staff member. Some researchers at, or formerly at, a provider may find their work appears in an institution’s submission a number of times, perhaps even across disciplines (there will now be no interdisciplinary flags, but an output may be submitted to more than one UOA and receive different scores).

    The obvious challenge is that while providers should submit representative outputs, the overriding temptation will be to submit what they believe to be their “best” and then work backwards to justify why it is representative. The REF team have anticipated this problem: the representativeness of a submission will be assessed through the disciplinary-led evidence statements. The full guidance on what these contain is yet to be released, but we know that

    The important issues of research diversity, demographics and career stages will be assessed as part of the wider disciplinary level evidence statements

    Research England’s position is that aligning outputs to where they are created, not who creates them, is a better way to measure institutional research performance. It should also end the incentive for universities to recruit researchers in order to capture their REF output – a practice thought to favour the larger universities that can afford to poach research staff.

    Debates had and debates to come

    In a previous piece for Wonkhe, Maria Delgado, Nandini Das, and Miles Padgett made the case that portability is key to fairness in REF. The opposite argument is being put forward by Research England. Maria, Nandini, and Miles argued that, whether we like it or not, one of the ways academics secure better career prospects is by improving the REF performance of a provider’s UOA. Research England makes the case that

    The core motivation is to minimise the REFs ability to exert undue influence on people’s careers. To achieve this, institutional funding (remember, QR funding does not track to individuals or departments) should follow the institutions that have genuinely provided and invested in the environment in which research is successful. Environments that recognise the collaborative nature of research and the diverse roles involved, rather than simply rewarding institutions positioned to recruit researchers to get reward for their past output.

    It is possible that both arguments are right. If outputs are tied to institutions, the incentive for institutions that want to do well in REF is to capture a greater number of high-quality outputs for their submission, and the way to do this is to support more researchers to do high-quality work. On the other hand, at an individual level and in a time of financial crisis for the sector, some researchers would likely benefit from being able to take their research outputs with them when they move institutions.

    In the comments on our initial portability piece it was flagged that researchers’ work could form part of an assessment where they no longer had any relationship with the provider. This feels particularly egregious if they have been made redundant as part of wider cost savings – the message being that the research output is high quality but it was nonetheless necessary to remove your post. The REF team have considered this:

    Outputs where the substantive link occurred before the submitted output was made publicly available, will not be eligible for submission where the author was subject to compulsory redundancy.

    The guidance explains that there may be cases where there is a substantive relationship but the research has not yet been published. On the face of it this seems a sensible compromise, but if the logic is that the provider is the place where research outputs are created, it seems contradictory (albeit kinder) to then limit the conditions under which that work can be assessed. There will likely be some outputs that were in the process of being published but not yet publicly available which fall under this clause.

    The guidance confirms a direction of travel established as far back as REF 2021 and made clear in the guidance so far for REF 2029. While the debate on who should be assessed in which circumstances continues, the wider concern for many will be that significant guidance is still outstanding, particularly on People, Culture and Environment, and the submission window for REF closes 30 months from now.

    A direction has been set. The sector needs to know the precise rules it is playing by if it is going to go along with it. There is undoubtedly a lot of goodwill around measuring research environments, culture, and the ways outputs are created more comprehensively. That goodwill will evaporate if guidance is not timely, clear, and complete.

  • Another way of thinking about the national assessment of people, culture, and environment

    There is a multi-directional relationship between research culture and research assessment.

    Poor research assessment can lead to poor research cultures. The Wellcome Trust survey in 2020 made this very clear.

    Assessing the wrong things (such as a narrow focus on publication indicators), or the right things in the wrong way (such as societal impact rankings based on bibliometrics) is having a catalogue of negative effects on the scholarly enterprise.

    Assessing the assessment

    In a similar way, too much research assessment can also lead to poor research cultures. Researchers are among the most heavily assessed professionals in the world: they are assessed for promotion, recruitment, probation, appraisal, tenure, grant proposals, fellowships, and output peer review. Their lives and work are constantly under scrutiny, creating competitive and high-stress environments.

    But there is also a logic (Campbell’s Law) that tells us that if we assess research culture it can lead to greater investment in improving it. It is this logic that the UK joint HE funding bodies have drawn on in their drive to increase the weighting given to the assessment of People, Culture and Environment in REF 2029. This makes perfect sense: given the evidence that positive and healthy research cultures are a key element of research excellence, it would be remiss of any Research Excellence Framework not to attempt to assess, and therefore incentivise, them.

    The challenge we have comes back to my first two points. Even assessing the right things, but in the wrong way, can be counterproductive, as may increasing the volume of assessment. Given research culture is such a multi-faceted concept, the worry is that the assessment job will become so huge that it quickly becomes burdensome, thus having a negative impact on those research cultures we want to improve.

    It ain’t what you do, it’s the way that you do it

    Just as research culture is not so much about the research that you do but the way that you do it, so research culture assessment should concern itself not so much with the outcomes of that assessment but with the way the assessment takes place.

    This is really important to get right.

    I’ve argued before that research culture is a hygiene factor. Most dimensions of culture relate to standards that it’s critically important we all get right: enabling open research, dealing with misconduct, building community, supporting collaboration, and giving researchers the time to actually do research. These aren’t things for which we should offer gold stars but basic thresholds we all should meet. And to my mind they should be assessed as such.

    Indeed this is exactly how the REF assessed open research in 2021 (and will do so again in 2029). They set an expectation that 95 per cent of qualifying outputs should be open access, and if you failed to hit the threshold, excess closed outputs were simply unclassified. End of. There were no GPAs for open access.

    In the tender for the PCE indicator project, the nature of research culture as a hygiene factor was recognised by proposing “barrier to entry” measures. The expectation seemed to be that for some research culture elements institutions would be expected to meet a certain threshold, and if they failed they would be ineligible to even submit to REF.

    Better use of codes of practice

    This proposal did not make it into the current PCE assessment pilot. However, the REF already has a “barrier to entry” mechanism of course: the completion of an acceptable REF Code of Practice (CoP).

    An institution’s REF CoP is about how they propose to deliver their REF, not how they deliver their research (although there are obvious crossovers). And REF have distinguished between the two in their latest CoP Policy module governing the writing of these codes.

    But given that REF Codes of Practice are now supposed to be ongoing, living documents, I don’t see why they shouldn’t take the form of more research-focussed (rather than REF-focussed) codes. It certainly wouldn’t harm research culture if all research performing organisations had a thorough research code of practice (most do of course) and one that covers a uniform range of topics that we all agree are critical to good research culture. This could be a step beyond the current Terms & Conditions associated with QR funding in England. And it would be a means of incentivising positive research cultures without ‘grading’ them. With your REF CoP, it’s pass or fail. And if you don’t pass first time, you get another attempt.

    Enhanced use of culture and environment data

    The other way of assessing culture to incentivise behaviours, without it leading to any particular rating or ranking, is to simply start collecting and surfacing data on the things we care about. For example, the requirement to share gender pay gap data and to report misconduct cases has focussed institutional minds on those things without there being any associated assessment mechanism. If you check out the Higher Education Statistics Agency (HESA) data on the ratio of male to female professors, in most UK institutions you can see the ratio heading in the right direction year on year. This is the power of sharing data, even when there’s no gold or glory on offer for doing so.

    And of course, the REF already has a mechanism to share data that informs, but does not directly make, an assessment, in the form of “Environment Data”. In REF 2021, Section 4 of an institution’s submission was essentially completed for them by the REF team by extracting, from HESA data, the number of doctoral degrees awarded (4a) and the volume of research income (4b); and, from the Research Councils, the volume of research income in kind (4c).

    This data was provided to add context to environment assessments, not to replace them. And it would seem entirely sensible to me that we identify a range of additional data – such as the gender and ethnicity of research-performing staff groups at various grades – to better contextualise the assessment of PCE, and to get matters other than the volume of research funding up the agendas of senior university committees.

    Context-sensitive research culture assessment

    That is not to say that Codes of Practice and data sharing should be the only means of incentivising research culture of course. Culture was a significant element of REF Environment statements in 2021, and we shouldn’t row back on it now. Indeed, given that healthy research cultures are an integral part of research excellence, it would be remiss not to allocate some credit to those who do this well.

    Of course there are significant challenges to making such assessments robust and fair in the current climate. The first of these is the complex nature of research culture – and the fact that no framework is going to cover every aspect that might matter to individual institutions. Placing boundaries around what counts as research culture could mean institutions cease working on agendas that are important to them, because they ostensibly don’t matter to REF.

    The second challenge is the severe and uncertain financial constraints currently faced by the majority of UK HEIs. Making the case for a happy and collaborative workforce when half are facing redundancy is a tough ask. A related issue here is the hugely varying levels of research (culture) capital across the sector, as I’ve argued before. Those in receipt of a £1 million “Enhancing Research Culture” fund from Research England are likely to make a much better showing than those doing research culture on a shoestring.

    The third is that we are already half-way through this assessment period, and we are only expected to get the final guidance in 2026 – two years prior to submission. Combined with the financial challenges outlined above, this will make this new element of our submission especially difficult. It was partly for this reason that some early work on the assessment of research culture was clear that it should celebrate the “journey travelled” rather than a “destination achieved”.

    For this reason, to my mind, the only things we can reasonably expect all HEIs to do right now with regard to research culture are to:

    • Identify the strengths and challenges inherent within your existing research culture;
    • Develop a strategy and action plan(s) by which to celebrate those strengths and address those challenges;
    • Agree a set of measures by which to monitor your progress against your research culture ambitions. These could be inspired by some of the suggestions resulting from the Vitae & Technopolis PCE workshops & Pilot exercise;
    • Describe your progress against those ambitions and measures. This could be demonstrated both qualitatively and quantitatively, through data and narratives.

    Once again, there is an existing REF assessment mechanism open to us here, and that is the use of the case study. We assess research impact by effectively asking HEIs to tell us their best stories – I don’t see why we shouldn’t make the same ask of PCE, at least for this REF.

    Stepping stone REF

    The UK joint funding bodies have made a bold and sector-leading move to focus research performing organisations’ attention on the people and cultures that make for world-leading research endeavours through the mechanism of assessment. Given the challenges we face as a society, ensuring we attract, train, and retain high quality research talent is critical to our success. However, the assessment of research culture has the power to make things either better or worse: to incentivise positive research cultures, or to entrench burdensome and competitive cultures that fail to tackle the issues that really matter to institutions.

    To my mind, given the broad range of topics being worked on by institutions in the name of improving research culture, where we are in the REF cycle, and the financial constraints facing the sector, we might benefit from a shift in the mechanisms proposed to assess research culture in 2029 – and from treating this as a stepping stone REF.

    Making better use of existing mechanisms such as Codes of Practice and environment and culture data would assess the “hygiene factor” elements of culture without unhelpfully attaching star ratings to them. Ratings would be better applied to the efforts institutions take to understand, plan, monitor, and demonstrate progress against their own, mission-driven research culture ambitions. This is where the real work is, and where real differentiation between institutions can be made when contextually assessed. Then, in 2036, when we can hope that the sector will be in a more financially stable place, and with ten years of research culture improvement behind us, we can assess institutions against their own ambitions, and ask whether they are starting to move the dial on this important work.
