Category: research culture

  • Action on researcher career development must go beyond surface-level fixes

    The Concordat to Support the Career Development of Researchers was designed to drive culture change, not compliance. However, many institutional action plans suggest institutions are meeting the letter rather than the spirit of its commitments.

    Financial constraints and the evolving REF 2029 people, culture and environment (PCE) guidance are shaping how institutions support research staff, and universities face a choice: stick with the easy, surface-level interventions that look good on paper, or commit to the tougher, long-term changes that could truly improve research careers.

    The latter is difficult, resource-intensive, and politically fraught – but it is the only route to a research culture that is genuinely sustainable.

    Progress and pressures

    There has been real progress in embedding researcher development in UK higher education. The 2019 review of the concordat highlighted expanded training opportunities, strengthened mentoring schemes, and, crucially, the integration of researcher development into institutional strategies and governance. Many institutions have since used its principles to shape research culture action plans and strategies.

    This progress has been uneven, however. Access to high-quality training and development opportunities varies across the sector, particularly for researchers in smaller, less well-resourced institutions. In addition, new initiatives frequently lack long-term sustainability beyond initial funding.

    Institutional action plans tend to emphasise softer measures – awards, charters and resource hubs – which, while useful, may function as reputational signals more than mechanisms for change. Meanwhile, the concordat’s more challenging commitments, such as improving job security, workload management, and the visibility of career pathways across sectors, receive less attention.

    Financial constraints and shifting priorities

    Universities are operating in an era of financial constraint that forces difficult decisions about what can be sustained and what must be scaled back. These financial pressures are already reshaping researcher development and career pathways, with potentially lasting consequences:

    Shift toward low-cost interventions: Institutions may prioritise training, mentoring, and “off-the-shelf” development workshops as the most financially viable options, while more complex reforms – such as improving career pathways, addressing workload pressures, and ensuring meaningful career learning – are pushed aside.

    Growing precarity and inequity in research careers: With the risk of non-renewal of fixed-term contracts and rising redundancies, instability may increase. The effects will likely be unequal – early-career researchers, those with caring responsibilities, and underrepresented groups are usually most affected in such situations, with workload pressures further widening existing inequities in career progression and retention.

    Shifts in career trajectories: Financial pressures will push more researchers to seek opportunities beyond academia, not always by choice but due to diminishing prospects within universities. This is not in itself a bad thing, but the absence of robust career tracking data, limited engagement with non-academic sectors, and a lack of structured support for diverse pathways mean that institutions risk making decisions in a vacuum.

    Without a clear understanding of where researchers go and what they need to thrive, researcher development may become misaligned with market realities – undermining both retention and outcomes. Initiatives like CRAC-Vitae’s new UK research career tracking initiative aim to close this critical evidence gap.

    What makes researcher development sustainable?

    What will actually make researcher development sustainable? The answer isn’t simply more initiatives, or cheaper ones – it’s about embedding development in institutional culture and building on evidence of what works. That means making time for development activities, creating space for strategic reflection, and encouraging researchers to learn from one another – not just offering mentoring or reciprocal schemes in isolation. Vitae’s refreshed Researcher Development Framework sets out the full breadth of what this encompasses.

    Researcher development doesn’t necessarily require large budgets. Much of it comes down to embedding development in the culture: time to pursue meaningful opportunities, support from line managers and supervisors to do so, and the ability to learn in community with others. Yet in times of crisis, workloads tend to rise – and it’s often this development time that’s seen as non-essential and cut. Around half of research staff do not have time to invest in professional development – demonstrating just how limited that space already is.

    These overlapping pressures are pushing institutions to make trade-offs – but it’s clear that the most effective and sustainable approaches to researcher development will depend not just on resource levels, but on institutional priorities and strategic leadership.

    Unmet expectations

    At the same time, the ongoing review of sector-wide concordats and agreements, meant to clarify priorities and improve alignment, seems to have stalled – raising concerns about whether it will lead to meaningful action. The Researcher Development Concordat Strategy Group, tasked with overseeing implementation and strategic coordination, has also been quiet over the last year, though the new chair has recently signalled renewed commitment to its activities.

    This stagnation raises questions about the long-term value of the concordat, particularly in a landscape where institutions are grappling with resource constraints. Without strong leadership and coordinated sector-wide action, there is a real risk that institutions will continue to take a fragmented, compliance-driven approach rather than pursuing deeper reform.

    If the concordat is to remain relevant, it must address the structural issues it currently skirts around – particularly those related to researcher employment conditions, workload sustainability, and career progression. Without this, it risks becoming another well-intentioned initiative that falls short of delivering real sector-wide change.

    PCE and the concordat

    The introduction of people, culture and environment (PCE) in REF 2029 was intended to shift the sector’s focus from research outputs to the broader conditions that enable research excellence. However, the way institutions interpret these requirements is critical.

    REF PCE has the potential to drive meaningful change – but only if institutions use it as a platform for genuine reflection rather than a showcase of best practices.

    PCE and the concordat share several ambitions: both emphasise inclusive research environments, professional development, and supporting leadership at all career stages. The concordat’s focus on employment conditions, researcher voice, and long-term career development also aligns with PCE’s emphasis on institutional responsibility for research culture.

    This coherence is no accident – PCE was co-developed with the sector, and the concordats and agreements review recognised the overlaps between existing frameworks.

    If institutions take a strategic, integrated approach, REF PCE could reinforce and enhance existing concordat commitments rather than becoming another compliance exercise. However, this requires institutions to go beyond superficial reporting and demonstrate tangible improvements in the working conditions and career pathways of researchers.

    A call to action

    If institutions want to move beyond just ticking boxes, they need to take bold, practical steps.

    Job security must be redefined in the current climate. Researcher development should focus not just on career skills and knowledge but also on career sustainability, accountability, and agility. While reducing reliance on fixed-term contracts remains a long-term goal, immediate priorities must also include clearer career progression routes (within and beyond higher education), cross-sector mobility, and support for career transitions.

    Workload and pay transparency need urgent attention. As researchers face increasing uncertainty about their career trajectories, solutions must go beyond surface-level fixes. This requires coordinated policy reform at both institutional and sector levels, including meaningful workload management strategies, transparent pay equity audits, and governance processes that embed researcher voices. While wellbeing initiatives have value, they are not a substitute for structural reform.

    Finally, the role of the concordat strategy group must evolve in response to the current climate. With institutions facing severe financial constraints and a shrinking research workforce, the group must take a more proactive role in advocating for sustainable researcher careers. This includes setting clearer expectations for institutions, addressing gaps in employment stability, and ensuring that commitments to researcher development are not lost amid cost-cutting measures. Without stronger leadership at the sector level, there is a risk that the concordat will become little more than a bureaucratic exercise, rather than a meaningful driver of change.

  • How our researchers are using AI – and what we can do to support them

    We know that the use of generative AI in research is now ubiquitous. But universities have limited understanding of who is using large language models in their research, how they are doing so, and what opportunities and risks this throws up.

    The University of Edinburgh hosts the UK’s first, and largest, grouping of AI expertise – so naturally, we wanted to find out how AI is being used. We asked our three colleges to check in on how their researchers were using generative AI, to inform what support we provide, and how.

    Using AI in research

    The most widespread use, as we would expect, was to support communication: editing, summarising and translating texts or multimedia. AI is helping many of our researchers to correct language, improve clarity and succinctness, and transpose text to new mediums including visualisations.

    Our researchers are increasingly using generative AI for retrieval: identifying, sourcing and classifying data of different kinds. This may involve using large language models to identify and compile datasets, bibliographies, or to carry out preliminary evidence syntheses or literature reviews.

    Many are also using AI to conduct data analysis for research. Often this involves developing protocols to analyse large data sets. It can also involve more open searches, with large language models detecting new correlations between variables, and using machine learning to refine their own protocols. AI can also test complex models or simulations (digital twins), or produce synthetic data. And it can produce new models or hypotheses for testing.

    AI is of course evolving fast, and we are seeing the emergence of more niche and discipline-specific tools. For example, self-taught reasoning models (STaRs) generate their own rationales and are then fine-tuned on them, improving their ability to answer a range of research questions. And retrieval augmented generation (RAG) enables large language models to draw on external data, enhancing the breadth and accuracy of their outputs.
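
    To make this concrete, below is a minimal sketch in Python of the retrieval half of a RAG pipeline: rank a small corpus against a question, then build a prompt that grounds the model’s answer in the retrieved passages. It is purely illustrative: the toy corpus, the question and the TF-IDF ranking are our own assumptions rather than any tool in use here, and a production pipeline would typically use semantic embeddings, a vector store, and a final call to a large language model for the generation step.

      # Illustrative sketch of RAG's retrieval step (assumed setup, not a real deployment).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      documents = [
          "Survey responses from 2023 on researcher training needs.",
          "Institutional policy on data retention for research projects.",
          "Notes on machine learning methods for evidence synthesis.",
      ]
      question = "Which methods support evidence synthesis?"

      # Represent the corpus and the question in the same TF-IDF space.
      vectoriser = TfidfVectorizer()
      doc_matrix = vectoriser.fit_transform(documents)
      query_vector = vectoriser.transform([question])

      # Retrieve the two passages most similar to the question.
      scores = cosine_similarity(query_vector, doc_matrix)[0]
      top_passages = [documents[i] for i in scores.argsort()[::-1][:2]]

      # Assemble an augmented prompt; the generation step (an LLM call) is omitted.
      prompt = "Answer using only the context below.\n\nContext:\n"
      prompt += "\n".join(f"- {p}" for p in top_passages)
      prompt += f"\n\nQuestion: {question}"
      print(prompt)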

    Across these types of use, AI can improve communication and significantly save time. But it also poses significant risks, which our researchers were generally alert to. These involve well-known problems with accuracy, bias and confabulation – especially where researchers use AI to identify new (rather than test existing) patterns, to extrapolate, or to underpin decision-making. There are also clear risks around sharing of intellectual property with large language models. And not least, researchers need to clearly attribute the use of AI in their research outputs.

    The regulatory environment is also complex. While the UK does not as yet have formal AI legislation, many UK and international funders have adopted guidelines and rules. For example, the European Union has a new AI Act, and EU-funded projects need to comply with European Commission guidelines on AI.

    Supporting responsible AI

    Our survey has given us a steer on how best to support and manage the use of AI in research – leading us to double down on four areas that require particular support:

    Training. Not surprisingly, the use of generative AI is far more prevalent among early-career researchers. This raises issues around training, supervision and oversight. Our early-career researchers need mentoring and peer support. But more senior researchers don’t necessarily have the capacity to keep pace with the rapid evolution of AI applications.

    This suggests the need for flexible training opportunities. We have rolled out a range of courses, including three new basic AI courses to get researchers started in the responsible use of AI in research, and online courses on the ethics of AI.

    We are also ensuring our researchers can share peer support. We have set up an AI Adoption Hub, and are developing communities of practice in key areas – notably AI and Health, one of the most active areas of AI research. A similar initiative is being developed for AI and Sustainability.

    Data safety. Our researchers are rightly concerned about feeding their data into large language models, given complex challenges around copyright and attribution. For this reason, the university has established its own interface to the main external large language models, including ChatGPT – the Edinburgh Language Model (ELM). ELM provides safer access to these models, operating under a “zero data retention” agreement so that data is not retained by OpenAI. We are encouraging our researchers to build their own tools on top of application programming interfaces (APIs), which allow them to provide more specific instructions to enhance their results.
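
    As an illustration of what working through such an interface can look like, the sketch below (in Python) sends a request to an OpenAI-compatible chat endpoint with a system prompt that constrains the model to a specific editing task; this is the sense in which an API lets researchers give more specific instructions. The base URL, model name and environment variables are hypothetical placeholders, not details of ELM’s actual configuration.

      # Illustrative sketch only: the endpoint, model name and key variable below
      # are hypothetical placeholders and do not describe ELM's real configuration.
      import os
      from openai import OpenAI

      client = OpenAI(
          base_url=os.environ.get("LLM_BASE_URL", "https://example.ac.uk/llm/v1"),  # placeholder
          api_key=os.environ["LLM_API_KEY"],  # never hard-code credentials
      )

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              {
                  "role": "system",
                  "content": (
                      "You are assisting with academic editing. Improve clarity and "
                      "grammar only; do not add content, and flag any claim you "
                      "cannot verify."
                  ),
              },
              {"role": "user", "content": "Please edit: 'The datas shows a clear trend.'"},
          ],
          temperature=0.2,  # keep outputs conservative for editing tasks
      )
      print(response.choices[0].message.content)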

    Ethics. AI in research throws up a range of challenges around ethics and integrity. Our major project on responsible AI, BRAID, and ethics training by the Institute for Academic Development, provide expertise on how we adapt and apply our ethics processes to address the challenges. We also provide an AI Impact Assessment tool to help researchers work through the potential ethical and safety risks in using AI.

    Research culture. The use of AI is ushering in a major shift in how we conduct research, raising fundamental questions about research integrity. When used well, generative AI can make researchers more productive and effective, freeing time to focus on those aspects of research that require critical thinking and creativity. But it also creates incentives to take shortcuts that can compromise the rigour, accuracy and quality of research. For this reason, we need a laser focus on quality over quantity.

    Groundbreaking research is not done quickly, and the most successful researchers do not churn out large volumes of papers – the key is to take time to produce robust, rigorous and innovative research. This is a message that will be strongly built into our renewed 2026 Research Cultures Action Plan.

    AI is helping our researchers drive important advances that will benefit society and the environment. It is imperative that we tap the opportunities of AI, while avoiding some of the often imperceptible risks in its misuse. To this end, we have decided to make AI a core part of our Research and Innovation Strategy – ensuring we have the right training, safety and ethical standards, and research culture to harness the opportunities of this exciting technology in an enabling and responsible way.

  • Supporting the careers of researchers means innovation, not isolation

    The phrase attributed to Sir Isaac Newton, “if I have seen further, it is because I have stood on the shoulders of giants,” is often used as a metaphor for research and innovation: how each great thinker builds on the thoughts and research of others, the unending column of prize winners and esteemed fellows pursuing academic endeavour.

    However, the environment I sought as a researcher and aim to enable as a university leader is more of a supportive collective, certainly one with a much less precarious base.

    Perhaps the most important lesson learnt during my own research career was that the giants of research, innovation and knowledge exchange whose shoulders we are more often standing on are not the senior staff but rather the PhD students, early career researchers, postdoctoral fellows and technicians, who turn the challenging questions posed to them into the most exciting, innovative answers. And often without the bias of doing things the way we have in the past.

    Untangling

    Achieving the UK’s priority of innovation and the growth it drives requires a long-range vision to set direction matched with agility to rapidly pivot as new opportunities arise. This agility needs a skilled research workforce and the attraction of the brightest minds into roles at all stages of a research and innovation career.

    However, these giants, whose shoulders we balance UK innovation on, need long-term confidence to initiate a career which currently has precarity baked in. Growing investment to support research and innovation is needed, but investment in equipment, facilities and consumables will not succeed without engaged and enabling expertise.

    Alongside this, regional disparity in funding, low research cost recovery, and increasing regulatory demands are posing the question of how much research any university can afford to undertake. The simple answer may appear to be to do less, or to divert funding to specialist institutes without dual responsibility for teaching – however, this would undermine the agility that is underpinned by the broad expertise, civic and industrial partnerships and infrastructure that reside across our higher education institutions.

    Fixing this knotty problem needs a systematic approach, balancing external and internal funding alongside improved recovery of the true cost of research. With restrictions in the sector and reduced internal funding impacting decisions, it is imperative to not forget the essential role of the precarious base on which our research activity in the UK is built – and to support it accordingly.

    Concordat priorities

    My commitment to career development and recognition of researchers is why, as incoming chair of the Researcher Development Concordat Strategy Group, which oversees the Researcher Development Concordat, I am excited to be continuing the great work led by Julia Buckingham.

    The concordat was first published in 2019, building on agreements between funding bodies and universities made over a decade earlier. The current signatories include more than 100 higher education and research institutions, which commit to its principles on environment and culture, employment, and career development for researchers, and 17 funding agencies, which set grant holder requirements relating to the concordat commitments.

    The concordat has recently undergone a review which identified future areas of focus to achieve continued effectiveness. Three priorities were identified:

    First, agreeing a set of shared principles that define the characteristics of a positive environment for research culture; and second, working to a shared set of research culture values with measurable indicators of progress. While these principles must link to the REF people, culture and environment measures, they need to remain high-level and to define measurable indicators of progress, to avoid confusion across multiple agendas. They also need to be broad enough to secure collective agreement to deliver, whilst accommodating the diversity and breadth of higher education institutions and research organisations.

    The third priority is simplifying the bureaucracy. This is essential in a sector facing ever-growing demands on attention and the associated costs of delivery. While we must maintain accountability, we need bureaucracy that works in service of our principles and values, not one that dictates them. In short, we must make it clear to our communities how the different national concordats can complement, rather than compete for, attention. To achieve this, we are reviewing and reforming reporting requirements to achieve better alignment and to fold them into existing reporting where possible, and we are working with other bodies to align data and reporting requirements.

    I am also keen to work with industry body representatives to understand and reduce barriers to career movement from academia to industry and vice versa. This career porosity is needed both for innovation and for rapid business adoption of innovative ideas. For this porosity to support innovation and growth, we also need greater engagement from industry in supporting researchers through a changing career.

    While this work is delivered by the concordat strategy group, the concordat is collectively owned by the sector and continued engagement is needed to ensure the concordat is fit for purpose. Given this, we are looking for engagement in future work, more details about which can be found on the concordat webpage. I look forward to working with higher education institutions, industry, funders, the Researcher Development Concordat Strategy Group, and individuals to deliver our collective commitments.

    The Researcher Development Concordat Strategy Group secretariat is jointly funded through funding bodies from the four nations: Research England, the Scottish Funding Council, Medr (previously HEFCW), and the Department for the Economy in Northern Ireland. I thank them for their continued support.

  • Another way of thinking about the national assessment of people, culture, and environment

    There is a multi-directional relationship between research culture and research assessment.

    Poor research assessment can lead to poor research cultures. The Wellcome Trust survey in 2020 made this very clear.

    Assessing the wrong things (such as a narrow focus on publication indicators), or the right things in the wrong way (such as societal impact rankings based on bibliometrics) is having a catalogue of negative effects on the scholarly enterprise.

    Assessing the assessment

    In a similar way, too much research assessment can also lead to poor research cultures. Researchers are one of the most heavily assessed professions in the world. They are assessed for promotion, recruitment, probation, appraisal, tenure, grant proposals, fellowships, and output peer review. Their lives and work are constantly under scrutiny, creating competitive and high-stress environments.

    But there is also a logic (Campbell’s Law) that tells us that if we assess research culture, it can lead to greater investment in improving it. And it is this logic that the UK joint HE funding bodies have drawn on in their drive to increase the weighting given to the assessment of People, Culture & Environment in REF 2029. This makes perfect sense: given the evidence that positive and healthy research cultures are an integral element of research excellence, it would be remiss of any Research Excellence Framework not to attempt to assess, and therefore incentivise, them.

    The challenge we have comes back to my first two points. Even assessing the right things, but in the wrong way, can be counterproductive, as may increasing the volume of assessment. Given research culture is such a multi-faceted concept, the worry is that the assessment job will become so huge that it quickly becomes burdensome, thus having a negative impact on those research cultures we want to improve.

    It ain’t what you do, it’s the way that you do it

    Just as research culture is not so much about the research that you do but the way that you do it, so research culture assessment should concern itself not so much with the outcomes of that assessment but with the way the assessment takes place.

    This is really important to get right.

    I’ve argued before that research culture is a hygiene factor. Most dimensions of culture relate to standards that it’s critically important we all get right: enabling open research, dealing with misconduct, building community, supporting collaboration, and giving researchers the time to actually do research. These aren’t things for which we should offer gold stars but basic thresholds we all should meet. And to my mind they should be assessed as such.

    Indeed this is exactly how the REF assessed open research in 2021 (and will do so again in 2029). They set an expectation that 95 per cent of qualifying outputs should be open access, and if you failed to hit the threshold, excess closed outputs were simply unclassified. End of. There were no GPAs for open access.

    In the tender for the PCE indicator project, the nature of research culture as a hygiene factor was recognised by proposing “barrier to entry” measures. The expectation seemed to be that for some research culture elements institutions would be expected to meet a certain threshold, and if they failed they would be ineligible to even submit to REF.

    Better use of codes of practice

    This proposal did not make it into the current PCE assessment pilot. However, the REF already has a “barrier to entry” mechanism, of course, which is the completion of an acceptable REF Code of Practice (CoP).

    An institution’s REF CoP is about how it proposes to deliver its REF, not how it delivers its research (although there are obvious crossovers). And REF has distinguished between the two in its latest CoP Policy module governing the writing of these codes.

    But given that REF Codes of Practice are now supposed to be ongoing, living documents, I don’t see why they shouldn’t take the form of more research-focussed (rather than REF-focussed) codes. It certainly wouldn’t harm research culture if all research performing organisations had a thorough research code of practice (most do of course) and one that covers a uniform range of topics that we all agree are critical to good research culture. This could be a step beyond the current Terms & Conditions associated with QR funding in England. And it would be a means of incentivising positive research cultures without ‘grading’ them. With your REF CoP, it’s pass or fail. And if you don’t pass first time, you get another attempt.

    Enhanced use of culture and environment data

    The other way of assessing culture to incentivise behaviours, without it leading to any particular rating or ranking, is simply to start collecting & surfacing data on things we care about. For example, the requirement to share gender pay gap data and to report misconduct cases has focussed institutional minds on those things without there being any associated assessment mechanism. If you check out the Higher Education Statistics Agency (HESA) data on the proportion of male to female professors, in most UK institutions you can see the ratio heading in the right direction year on year. This is the power of sharing data, even when there’s no gold or glory on offer for doing so.

    And of course, the REF already has a mechanism to share data that informs, but does not directly make, an assessment, in the form of ‘Environment Data’. In REF 2021, Section 4 of an institution’s submission was essentially completed for it by the REF team, extracting from HESA data the number of doctoral degrees awarded (4a) and the volume of research income (4b), and from the Research Councils the volume of research income in kind (4c).

    This data was provided to add context to environment assessments, but not to replace them. And it would seem entirely sensible to me that we identify a range of additional data – such as the gender & ethnicity of research-performing staff groups at various grades – to better contextualise the assessment of PCE, and to get matters other than the volume of research funding up the agendas of senior university committees.

    Context-sensitive research culture assessment

    That is not to say that Codes of Practice and data sharing should be the only means of incentivising research culture of course. Culture was a significant element of REF Environment statements in 2021, and we shouldn’t row back on it now. Indeed, given that healthy research cultures are an integral part of research excellence, it would be remiss not to allocate some credit to those who do this well.

    Of course there are significant challenges to making such assessments robust and fair in the current climate. The first of these is the complex nature of research culture – and the fact that no framework is going to cover every aspect that might matter to individual institutions. Placing boundaries around what counts as research culture could mean institutions cease working on agendas that are important to them, because they ostensibly don’t matter to REF.

    The second challenge is the severe and uncertain financial constraints currently faced by the majority of UK HEIs. Making the case for a happy and collaborative workforce when half are facing redundancy is a tough ask. A related issue here is the hugely varying levels of research (culture) capital across the sector, as I’ve argued before. Those in receipt of a £1 million ‘Enhancing Research Culture’ fund from Research England are likely to make a much better showing than those doing research culture on a shoestring.

    The third is that we are already half-way through this assessment period and we’re only expected to get the final guidance in 2026 – two years prior to submission. And given the financial challenges outlined above, this is going to make this new element of our submission especially difficult. It was partly for this reason that some early work to consider the assessment of research culture was clear that this should celebrate the ‘journey travelled’, rather than a ‘destination achieved’.

    For this reason, to my mind, the only things we can reasonably expect all HEIs to do right now with regard to research culture are to:

    • Identify the strengths and challenges inherent within your existing research culture;
    • Develop a strategy and action plan(s) by which to celebrate those strengths and address those challenges;
    • Agree a set of measures by which to monitor your progress against your research culture ambitions. These could be inspired by some of the suggestions resulting from the Vitae & Technopolis PCE workshops & Pilot exercise;
    • Describe your progress against those ambitions and measures. This could be demonstrated both qualitatively and quantitatively, through data and narratives.

    Once again, there is an existing REF assessment mechanism open to us here, and that is the use of the case study. We assess research impact by effectively asking HEIs to tell us their best stories – I don’t see why we shouldn’t make the same ask of PCE, at least for this REF.

    Stepping stone REF

    The UK joint funding bodies have made a bold and sector-leading move to focus research-performing organisations’ attention, through the mechanism of assessment, on the people and cultures that make for world-leading research endeavours. Given the challenges we face as a society, ensuring we attract, train, and retain high-quality research talent is critical to our success. However, the assessment of research culture has the power to make things better or worse: to incentivise positive research cultures, or to entrench burdensome and competitive cultures that don’t tackle all the issues that really matter to institutions.

    To my mind, given the broad range of topics institutions are working on in the name of improving research culture, where we are in the REF cycle, and the financial constraints facing the sector, we might benefit from a shift in the mechanisms proposed to assess research culture in 2029, and from seeing this as a stepping stone REF.

    Making better use of existing mechanisms such as Codes of Practice and environment and culture data would assess the “hygiene factor” elements of culture without unhelpfully attaching star ratings to them. Ratings would be better applied to the efforts institutions take to understand, plan, monitor, and demonstrate progress against their own, mission-driven research culture ambitions. This is where the real work is and where real differentiation between institutions can be made, when contextually assessed. Then, in 2036, when we can hope that the sector will be in a financially more stable place, and with ten years of research culture improvement time behind us, we can assess institutions against their own ambitions and ask whether they are starting to move the dial on this important work.
