Category: Research

  • Re-thinking research support for English universities: Research England’s programme of work during the REF 2029 pause

    Re-thinking research support for English universities: Research England’s programme of work during the REF 2029 pause

    In September, Science Minister Lord Vallance announced a pause to developing REF 2029 to allow REF and the funding bodies to take stock. Today, REF 2029 work resumes with a refreshed focus to support a UK research system that delivers knowledge and innovation with impact, improving lives and creating growth across the country.

    Research England has undertaken a parallel programme of work during the pause, intended to deliver outcomes that align with Government’s priorities and vision for higher education as outlined in the recently published Post-16 Education and Skills white paper. Calling this a pause doesn’t reflect the complexity, pace and challenge faced in delivering the programme over the last three months.

    Since September, we have:

    • explored the option of baseline performance in research culture being a condition of funding
    • considered how our funding allocation mechanisms in England could be modified to better reward quality, as part of our ongoing review of Strategic Institutional Research Funding (SIRF)
• fast-tracked existing activity related to the allocation of mainstream quality-related research funding (QR)
    • developed our plans to consider the future of research assessment.

To progress this work over the last three months, we’ve engaged thoughtfully with groups across the English higher education and research sector, as well as with the devolved funding bodies, to help us understand the wider context and refine our approaches. Let me outline where we’ve got to – and where we’re going next – with the work we’ve been doing.

    Setting a baseline for research cultures

Each university, department and team is unique. They have their own values, priorities and ways of working. I therefore like to think of ‘research cultures and environments’, using the term in the plural to reflect this diversity. The report of the REF People, Culture and Environment pilot, also published today, confirms that there is excellent practice in this area across the higher education sector. REF 2029 offers an opportunity to recognise and reward those institutions and units that are creating the open, inclusive and collaborative environments that enable excellent research and researchers to thrive.

    At the same time, we think there are some minimum standards that should be expected of all providers in receipt of public funding. To promote these standards, we will be strengthening the terms and conditions of Research England funding related to research culture. In the first instance, this will mean a shift from expecting certain standards to be met, to requiring institutions to meet them.

We are very conscious of the need not to increase the burden on the sector or to create unnecessary bureaucracy. This will only succeed if we engage closely with the sector to understand how it can work effectively in practice. To this end, we will be engaging with groups in early 2026 to establish rigorous standards that are relevant across the diversity of English institutions. As far as possible, we will use existing reporting mechanisms such as the annual assurance report provided by signatories to the Research Integrity Concordat. While meeting the conditions will not be optional, we will support institutions that don’t yet meet all the requirements, working together and utilising additional reporting to help with and monitor improvements. And because research cultures aren’t static, we will evolve our conditions over time to reflect changes in the sector.

    This will lead to sector-wide improvements that we can all get behind:

    • support for everyone who contributes to excellent and impactful research: researchers, technicians and others in vital research-enabling roles, across all career stages
    • ensuring research in England continues to be done with integrity
• ensuring that it is also done openly
    • strengthening responsible research assessment.

    Our next steps are to engage with the sector and relevant groups as part of the process of making changes to our terms and conditions of funding, and to establish low-burden assurance mechanisms. For example, working as part of the Researcher Development Concordat Strategy Group, we will collectively streamline and strengthen the concordat, making it easier for institutions to implement this important cross-sectoral agreement.

    These changes will complement the assessment of excellent research environments in the REF and the inspiring practice we see across the sector. Championing vibrant research cultures and environments is a mission that transcends the REF — it’s the foundation for maintaining and enhancing the UK’s world-leading research, and we will continue to work with the devolved funding bodies to fulfil the mission.

    Modelling funding mechanisms

The formula-based, flexible research funding Research England distributes to English universities is crucial to underpinning the HE research landscape and supporting the financial sustainability of the sector. We are aware that this funding is increasingly being spread more thinly.

As part of the review of strategic institutional research funding (SIRF), we are working to understand the wider effectiveness of our funding approaches and consider alternative allocation mechanisms. Work on this review is continuing at speed. We will provide an update to the sector next year on progress, alongside the publication of the independent evaluation of SIRF, anticipated in early 2026.

    Building on this, we have been considering how our existing mechanisms in England could be modified to better reward quality of research. This work looks at how different strands of SIRF – from mainstream QR to specialist provider funding – overlap, and how that affects university finances across English regions and across institution types. We are continuing to explore options for refining our mainstream QR formula and considering the consequences of those different options. This is a complex piece of work, requiring greater time and attention, and we expect next year to be a key period of engagement with the sector.

    The journey ahead

    While it may seem early to start thinking about assessment after REF 2029, approaches to research assessment are evolving rapidly and it is important that we are able to embrace the opportunities offered by new technologies and data sources when the moment comes. We have heard loud and clear that early clarity on guidance reduces burden for institutions and we want to be ready to offer that clarity. A programme of work that maximises the opportunity offered by REF 2029 to shape the foundation for future frameworks will be commencing in spring 2026.

    Another priority will be to consider how Research England as the funding body for England, and as part of UKRI, can support the government’s aim to encourage a greater focus on areas of strength in the English higher education sector, drawing on the excellence within all our institutions. As I said at the ARMA conference earlier in the year, there is a real opportunity for universities to identify and focus on the unique contributions they make in research.

    The end of the year will provide the sector (and my colleagues in Research England and the REF teams) with some much-needed rest. January 2026 will see us pick back up a reinvigorated SIRF review, informed by the REF pause activity. We will continue to refine our research funding and policy to – as UKRI’s new mission so deftly puts it – advance knowledge, improve lives and drive growth.

    Source link

  • Before we automate REF there are three issues we need to talk about

    Before we automate REF there are three issues we need to talk about

    The long-awaited REF-AI report prompts the sector to imagine an increasingly automated REF, but also leaves several important questions unanswered about what such a future might mean for the people and practices that underpin research assessment. Before we embed AI more deeply into REF2029, we need to pause and reflect on three issues that deserve much greater attention, starting with the long-term risks to disciplinary expertise.

    Long-term impacts: Efficiency gains and the risk of skills erosion

    Recommendation 15 in the report proposes that: “REF assessments should include a human verification step… confirming that final judgements rest on human academic expertise.”

    This feels sensible on the surface. But the longer-term implications warrant more attention. Across many sectors, evidence shows that when automation takes on tasks requiring expert judgement, human expertise can slowly erode as roles shift from analysis to oversight. The report itself recognises this trend when discussing labour substitution and task reallocation.

    REF processes already rely heavily on signals, heuristics and proxies, particularly under time pressure. Introducing AI may further reduce opportunities for deep disciplinary reading in panel work. If this happens, then by the 2030s or 2040s, the experts needed to meaningfully verify AI-generated assessments may become harder to sustain.

    This is not an argument against using AI, but rather a suggestion that we need to consider the long-term stewardship of disciplinary expertise, and ensure that any AI integration strengthens, rather than displaces, human judgement. We don’t yet have expertise in how to collaborate effectively with AI systems and their outputs. This needs to be developed as a conscious endeavour to ensure that AI supports research assessment responsibly.

    Learning from Responsible Research Assessment (RRA)

Over more than a decade, frameworks such as DORA, CoARA, the Hong Kong Principles and the Leiden Manifesto have laid out clear principles for the responsible use of quantitative indicators, transparency, equity, and disciplinary diversity. The REF-AI report notes that in the interviews conducted: “Seldom was mention made of responsible research assessment initiatives such as DORA and CoARA… There is no clear view that the deployment of GenAI tools in the REF is antithetical to the ambitions of such initiatives.” But the absence of discussion in the focus groups does not necessarily mean positive alignment; it may simply indicate that RRA principles were not a prominent reference point in the design or facilitation of the project.

    A fuller analysis could explore how AI intersects with core RRA questions, including: i) How do we assess what we value, not just what is machine-readable? ii) How do we prevent AI from amplifying systemic inequities? iii) How do we ensure transparency in systems underpinned by proprietary models? and iv) How do we avoid metrics-by-stealth re-entering the REF through automated tools? These considerations are essential, not peripheral, to thinking about AI in research assessment.

    Representation: A report on bias that overlooks some of its own challenges

Finally, representation. As the authors have acknowledged themselves, it is hard to ignore that the authorship team comprises four men, three of whom are senior and white. This matters, not as a criticism of the individuals involved, but because who examines AI uptake shapes how issues of bias, fairness and inclusion are framed. Generative AI systems are widely acknowledged as being trained on text that contains gendered, racialised and geographical biases; the report also notes that: “Concerns of bias and inaccuracy related to GenAI tools are widely acknowledged…” What is less evident, however, is a deeper engagement with how these biases might play out within a national research assessment exercise that already shows uneven outcomes for different groups.

    A similar issue arises in the dataset. Half of the interviewees were from Russell Group institutions, despite the Russell Group representing around 15 per cent of REF-submitting HEIs. The report itself notes that experimentation with AI is concentrated in well-resourced institutions: “Variation in experimentation with GenAI tools is mainly influenced… by institutional resource capacity.”

    Given this, the weighting of the sample will skew the perspectives represented. This does not necessarily invalidate the findings, but it does raise questions about whether further, broader consultation would strengthen confidence in the conclusions drawn.

    Doing it better?

    The report does an excellent job of surfacing current institutional anxieties. Larger, well-resourced universities appear more open to integrating AI into REF processes; others are more cautious. Survey findings suggest notable scepticism among academics, particularly in Arts, Humanities and Social Sciences. Despite this, the report signals a direction of travel in which REF “inevitably” becomes AI-enabled and eventually “fully automated.” Whether this future is desirable, or indeed equitable, remains an open question.

    The REF-AI report is therefore best read as an important starting point. For the next phase, it will be vital that Research England broadens the conversation to include a wider diversity of voices, including experts in equality and inclusion, disciplinary communities concerned about long-term skills, those with deep experience in RRA, smaller institutions, and early career researchers who will inherit whatever system emerges.

    This more diverse team must be given licence to make bold decisions about not just what’s inevitable but what’s desirable for the research ecosystem the REF ultimately seeks to monitor and shape. We cannot simply pay lip service to principles of responsible research assessment, equity, diversity and inclusion, and ignore the resulting outcomes of the decision-making processes shaped by those principles.

    AI will undoubtedly shape aspects of future research governance and assessments. The challenge, now, is to ensure that its integration reflects sector values, not just technological possibility.

    Source link

  • Labour must not repeat history by sidelining research in post-92 universities

    Labour must not repeat history by sidelining research in post-92 universities

    As Labour eyes reshaping the higher education sector, it risks reviving a binary divide that history shows would weaken UK research.

While there is much to admire in the post-16 education and skills white paper regarding the vision for upskilling the population, there are some more difficult proposals. There in the shadows lies the call for HE institutions to specialise, with the lurking threat that many will lose their research funding in some, perhaps many, areas in order to better fund those with more intensive research.

The threat resides in the very phrasing used to describe research funding reform in the white paper: the “strategic distribution of research activity across the sector” to ensure institutions are “empowered to build deep expertise in areas where they can lead.” What is the benchmark here for judging whether someone can lead?

    It raises once again the question: should non-intensive research institutions – by which I largely here mean post-92 universities – undertake research at all?

    Since the paper came out, both Secretary of State for Science, Innovation and Technology Liz Kendall and science minister Sir Patrick Vallance have stressed that this “specialisation” will not privilege the traditional elite institutions, with Sir Patrick describing as “very bizarre” the idea that prioritisation necessarily means concentration of power in a few universities.

    Liz Kendall echoes this logic, framing strategically focused funding as akin to a “no-compromise approach,” similar to investing more intensely in select Olympic sports to win medals rather than spreading resources thinly over many.

    Yet for many post-92 institutions, this re-engineering of UK research funding spells very real danger. Under a model that favours “deep expertise” in fewer, strongly performing institutions, funding for more broadly based teaching and research universities risks erosion. The very students and communities that post-92 universities serve – often more diverse, more regional, and less elite – may find themselves further marginalised.

Moreover, even where teaching-only models are adopted, there is already private concern that degrees taught without regular input from research-active staff risk being perceived as inferior, despite charging similar fees. Pushing these providers towards a “teaching-only” role risks repeating a mistake we thought we had left behind in 1992: before then, polytechnics undertook valuable research but were excluded from national frameworks.

    Excellence and application

    When I wrote earlier this year that so-called “research minnows” have a vital role in UK arts and humanities doctoral research, the argument was simple: diversity of institutions, methods, locations, and people strengthens research. That truth matters even more today.

Before 1992, polytechnics undertook valuable research in health, education, design and industry partnerships, amongst other things. But they were structurally excluded from national assessment and funding. In 1989, Parliament described that exclusion as an “injustice”; now it appears it may be seen as just. Yet it’s not clear what has materially changed to form that view, beyond a desire to better fund some research.

    The 1992 reforms did not “invent” research in the ex-polytechnics. They recognised it – opening the door to participation in the Research Assessment Exercise (RAE), quality-related funding and Research Council grants. Once given visibility, excellence surfaced quickly. It did so because it had always been there.

    In the 1996 Research Assessment Exercise – only the second in which post-92s could take part – De Montfort University’s Built Environment submission was rated 4 out of 5*. That placed it firmly in the category of nationally excellent research with international recognition, a standard many established pre-92 departments did not reach in that assessment panel. Indeed, the University of Salford topped the unit of assessment with 5*, just as City did in Library Studies. In Civil Engineering, the 5s of UCL and Bristol were also matched by City.

In Physics, Hertfordshire with a 4 equalled most Russell Group universities, as did its score in Computer Science. In Linguistics and in Russian, Thames Valley (now the University of West London) and Portsmouth respectively earned 5s, equalling Oxford and Cambridge. In Sports, Liverpool John Moores and Brighton topped the ranking alongside Loughborough with their 5s.

And it wasn’t just the ex-polytechnics that shone in many areas; the universities formed from institutes did too. The University of Gloucestershire outperformed Cambridge in Town and Country Planning with a 4 against a 3a. Southampton Solent received a 4 in History, equalling York.

The RAE 1996 results are worth recalling; as new universities that had not previously had the seed funding of the older universities, we certainly punched above our weight.

    Since their re-designation as universities, and even before, post-92 universities have built distinctive and complementary research cultures: applied, interdisciplinary, and place-anchored. Their work is designed to move quickly from knowledge to practice – spanning health interventions to creative industries, curriculum reform to urban sustainability.

Applied and interdisciplinary strength was evident in 1996 in the high scores (4) achieved in Allied Health (Greenwich, Portsmouth and Sheffield Hallam), Sociology (City) and Social Policy (London South Bank and Middlesex). Art and Design was dominated by post-92s, as were Communications and Cultural Studies (with 5s for Westminster and the University of East London). In Music, City (5) and DMU and Huddersfield (4) saw off many pre-92s.

    This is not second-tier research. It broadens the national portfolio, connects directly to communities, and trains the professionals who sustain public services. To turn these universities into “teaching-only” providers would not only weaken their missions, it would shrink the UK’s research base at the very moment that the government wants it to grow.

    Learning history’s lessons

Research, which as we know universities undertake at a loss, has been sustained over recent decades through cross-subsidy from international student fees and other sources. Those who have been able to charge the highest international fees have had the greater resource.

But I wonder what the UK research and economic landscape would look like now if, thirty years ago, national centres of excellence had been created following the 1996 RAE, rather than letting much of our excellent national research wither because no institutional cross-subsidy was available. Had that been done, we would have stronger research now, with centres of research excellence in places where the footprint of those disciplines has since entirely disappeared.

There is a temptation to concentrate funding in fewer institutions, on the assumption that excellence lives only in the familiar elite. But international evidence shows that over-concentration delivers diminishing returns, while broader distribution fosters innovation and resilience. Moreover, our focus on golden triangles, clusters and corridors of innovation can exclude more geographically remote areas; we might think of the University of Lincoln’s leadership in advancing artificial intelligence in defence decision-making and agri-tech, or Plymouth’s marine science expertise. Post-92 research is often conducted hand-in-hand with industry – a model that is very much needed.

    If the government wants results – more innovation, stronger services, a wider skills base – it must back promising work wherever it emerges, not only in the institutions the system has historically favoured.

    The binary divide was abolished in 1992 because it limited national capacity and ignored excellence outside a privileged tier. Re-creating that exclusion under a new label would repeat the same mistake, and exclude strong place-based research.

    If Labour wants a stronger, fairer system, it must resist the lure of neat hierarchies and support the full spectrum of UK excellence: theoretical and applied, lab-based and practice-led, national and local. That is the promise of the so-called “minnows” – not a drag on ambition, but one of the surest ways to achieve it. Sometimes minnows grow into big fish!

    Fund wherever there is excellence, and let that potential grow – spread opportunity wide enough for strengths to surface, especially in institutions that widen participation and anchor regional growth. The lesson is clear: when you sideline parts of the sector, you risk cutting off strengths before they are seen.

    Source link

  • The R&D buckets are here to stay – what matters now is how they’re used

    The R&D buckets are here to stay – what matters now is how they’re used

    The Budget and the introduction of DSIT’s new bucket framework mark a shift in how government wants to think and talk about research and innovation. With growth now central to the government’s agenda, it is a clear attempt to answer Treasury’s perennial question: what does the public get for its money?

    At the centre of this shift sits the idea of R&D “buckets”: a four-part categorisation of public R&D funding into curiosity-driven research, government priorities, innovation support and cross-cutting infrastructure.

    The logic behind the buckets is easy to understand. The UK system is complex, with budget lines stretching across a maze of research councils, departments, institutes, academies and government labs. Even seasoned insiders need a cup of coffee before attempting to decipher the charts on one of UKRI’s much-valued budget explainers.

From the Treasury’s perspective, the lack of clarity is a barrier to demonstrating the value of government investment. DSIT’s response is the bucket model: a clearer way of presenting public investment that moves the conversation away from budget lines and towards outcomes that matter to citizens. If this helps build broader support for R&D across departments and with the public, as CaSE’s latest research suggests is needed, it could be hugely valuable.

    The outcomes challenge

    One consequence of an outcomes-driven model, however, is that different types of research will find it easier or harder to demonstrate their value. Basic and curiosity-driven research can be difficult to evidence through simple KPIs or narrow ROI measures.

    In contrast, some forms of applied R&D lend themselves more easily to straightforward metrics. The Higher Education Innovation Fund (HEIF) is a good example. It can demonstrate a return on investment of £14.80 to £1 in ways that are simple to communicate and easy for officials to interpret. In a system that places a premium on measurable outcomes, this kind of clarity is powerful.

    If outcomes become the dominant organising logic, there is a risk that bucket one, which covers curiosity-driven research, could appear on paper to be the least “investable” – especially under a future minister who is less supportive of blue-skies research. The danger is not deliberate neglect, but an unintended shift in perception, whereby discovery research is viewed as separate from, rather than essential to, mission-led or innovation-focused work.

    The challenge becomes even clearer when we look at quality-related research funding (QR). Few funding mechanisms are as versatile or as important to the health of the research ecosystem. QR supports discovery research, helps universities leverage private investment, underpins mission- and place-based activity, and fills the gaps left by research council and charity grants. It is the flexible connective tissue that keeps the system functioning.

Trying to code QR neatly into a single bucket (bucket one) doesn’t reflect reality. It may make the diagrams tidier, but it also risks narrowing Whitehall’s understanding of how QR actually works. Worse, it could make QR more vulnerable at fiscal events if bucket one is cast as the “future problem” bucket, the category that can be trimmed without immediately visible consequences.

    The trap of over-simplification

    That brings us to a wider point about the buckets themselves. The intention with buckets is to draw a much more explicit line between priorities, investment and impact. This is a reasonable goal. But the risk is that it invites interpretations that are too neat. Most research does not sit cleanly in any one category. The system is interdependent, porous and overlapping. Innovation depends on discovery research. Regional growth depends on long-term capability. And capability only exists if the UK continues to invest in talent, infrastructure and basic research.

    Rather than accepting a model that implies hard boundaries, it may be more helpful to embrace, and actively communicate, this interdependence. A Venn diagram might be a more honest reflection than three or four boxes with solid walls.

    The aim is not to relabel the buckets, but to strengthen the narrative around how the types of research we fund reinforce each other, rather than competing for space in a zero-sum system. This kind of framing could also help government understand why certain funding streams look costly on paper, but yield value across a wide range of outcomes over time.

    One argument is that by identifying curiosity-driven research as a distinct bucket, it will be harder for future governments to cut it without doing so publicly. There is some truth in this. Transparency can raise the political cost of reducing support for basic research. But the counterargument is also important. Once bucket one becomes a visible and discrete line of spend, it could also become more vulnerable during fiscal consolidations. Ministers looking to free up resources for missions or innovation-focused interventions may see it as an easier place to make adjustments, especially if the definition of “impact” narrows over time.

    Shovel ready

    This is why the narrative around the buckets matters as much as the buckets themselves. If they are understood as three separate spaces competing for limited resources, the system loses coherence. Discovery becomes something distant from growth, rather than the engine that drives it. Missions appear disconnected from the long-term capability required to achieve them. Innovation emerges as a standalone activity rather than as part of a pipeline that begins with public investment in fundamental science.

    The bucket framework is not going away. It will shape how government talks about R&D for years to come. This makes the next phase critical: there is an opportunity now to influence how the buckets are interpreted, how they are used in practice and how the narrative around them is constructed.

    If treated as rigid boundaries, the buckets risk weakening the case for long-term investment in capability. But if used as a way of telling a more coherent story about the interdependence of discovery, missions and innovation, they could help build stronger cross-government support for R&D. The challenge is to make sure the latter happens.

    Source link

  • A topic modelling analysis of higher education research published between 2000 and 2021

    A topic modelling analysis of higher education research published between 2000 and 2021

    by Yusuf Oldac and Francisco Olivas

    We recently embarked upon a project to explore the development of higher education research topics over the last decades. The results were published in Review of Education. Our aim was to thematically map the field of research on higher education and to analyse how the field has evolved over time between 2000 and 2021. This blog post summarises our findings and reflects on the implications for HE research.

HE research continues to grow. HE researchers are based in increasingly diverse locations around the world and publish on a diversifying range of topics. Studies examining the development of HE through global-level analysis are increasingly emerging. However, most of these studies are limited to scientometric network analyses that do not include a content-related focus. In addition, they are deductive, meaning that they fit new findings into existing categories. Recently, Daenekindt and Huisman (2020) were able to capture the scholarly literature on higher education through an analysis of latent themes using topic modelling. This approach attracted attention in the literature, and the study’s contribution was highlighted in an earlier SRHE blog post. We also found their study useful and built on it in our novel analysis. However, their analysis focused only on generating topics from a wide range of higher education journals and did not identify explanatory factors, such as change over the years or the location of publication. After identifying this gap, we set out to move one step further.

A central contribution of our study is the inclusion of a set of explanatory factors for research content, namely time, region, funding, collaboration type, and journal, to investigate the topics of HE research. In methodological terms, our study moves beyond describing topic prevalence to explaining that prevalence, using structural topic modelling (Roberts et al, 2013).

    Structural topic modelling is a machine learning technique that examines the content of provided text to learn patterns in word usage without human supervision in a replicable and transparent way (Mohr & Bogdanov, 2013). This powerful technique expands the methodological repertoire of higher education research. On one hand, computational methods make it possible to extract meaning from large datasets; on the other, they allow the prediction of emerging topics by integrating the strengths of both quantitative and qualitative approaches. Nevertheless, many scholars in HE remain reluctant to engage with such methods, reflecting a degree of methodological conservatism or tunnel vision (see Huisman and Daenekindt’s SRHE blog post).
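
As a rough illustration of what a topic-modelling workflow looks like in code, the sketch below fits a plain LDA model with the Python gensim library. It is a simplified stand-in rather than our actual structural topic modelling pipeline (which follows Roberts et al), and the toy abstracts and topic count are purely illustrative.

from gensim import corpora, models
from gensim.utils import simple_preprocess

# Hypothetical stand-in corpus; the actual study analysed 6,562 papers from six journals
abstracts = [
    "higher education policy reform and governance in national systems",
    "student learning and teaching practices in university classrooms",
    "doctoral education supervision and early career researcher development",
]

# Tokenise the texts and build a bag-of-words representation
tokenised = [simple_preprocess(text) for text in abstracts]
dictionary = corpora.Dictionary(tokenised)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenised]

# Fit a small LDA topic model (the real analysis identified 15 topics)
lda = models.LdaModel(bow_corpus, num_topics=3, id2word=dictionary,
                      passes=10, random_state=42)

# Inspect the top words associated with each topic
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)

Unlike this simplified sketch, structural topic modelling additionally regresses topic prevalence on document-level covariates – here year, region, funding, collaboration type and journal – which is what allows topic prevalence to be explained rather than merely described.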

In this blog post, our intention is not to go deep into the minute details of this methodological technique, but to share a glimpse of our main findings through its use. Using a corpus of all papers published between 2000 and 2021 in the top six generalist journals of higher education, as listed by both Cantwell et al (2022) and Kwiek (2021), we analysed a dataset of 6,562 papers. As a result, we identified 15 emergent research topics and several major patterns that highlight the thematic changes over the last decades. Below, we share some of our findings, accompanied by relevant visualisations.

    Glimpse at the main findings with relevant visuals

    The emergent 15 higher education topics and three visibly rising ones

Our topic modelling analysis revealed 15 distinct topics, which are largely in line with the topics discussed in previous studies in this line of work (eg Teichler, 1996; Tight, 2003; Horta & Jung, 2014). However, there are added nuances in our analysis. For example, the most prevalent topics are policy and teaching/learning, which are widely acknowledged in the field, but new themes have emerged and strengthened over time. These themes include identity politics and discrimination, access, and employability. These areas, conceptually linked to social justice, have become central to higher education research, especially in US-based journals but not limited to them. The visual below demonstrates the changes over the years for all 15 topics.

• The influence of funding on higher education research topics

Research funding plays a crucial role in shaping certain topics, particularly gender inequality, access, and doctoral education. Studies that received funding exhibited a higher prevalence of these socially significant topics, underscoring the importance of targeted funding to support research with social impact. The data visualisation below summarises the influence of reported funding for each topic. The novelty of this pattern is worth highlighting because we have not come across a previous study examining the influence of the presence of funding on research topics in the higher education field.

    • The impact of collaboration on higher education research topics

    Collaborative publications are more prevalent in topics such as teaching and learning, and diversity and social relations. By contrast, theoretical discussions, identity politics, policy, employability, and institutional management are more common in solo-authored papers. This pattern aligns with the nature of these topics and the data requirements for research. Please see the visualised data below.

    We highlight that although the relationship between collaboration and citation impact or researcher productivity is well studied, we are not aware of any evidence of the effect of collaboration patterns on topic prevalence, particularly in studies focusing on higher education. So, this finding is a novel contribution to higher education research.

    • Higher education journals’ topic preferences

Although the six leading journals claim to be generalist, our analysis shows they have differing publication preferences. For example, Higher Education focuses on policy and university governance, while Higher Education Research and Development stands out for teaching/learning and indigenous knowledge. Journal of Higher Education and Review of Higher Education, two US-based journals, have the highest prevalence of identity politics and discrimination topics. Lastly, Studies in Higher Education has a significantly higher prevalence of teaching and learning, theoretical discussions, doctoral education, and emotions, burnout and coping than most of the other journals.

    • Regional differences in higher education research topics

Topic focus varies significantly by the region of the first author. First, studies from Asia exhibit the highest prevalence of academic work and institutional management. Studies from Africa show a higher prevalence of identity politics and discrimination. Moreover, studies by first authors from Eastern European countries stand out with a higher prevalence of employability. Lastly, the policy topic has a high prevalence across all regions; however, studies with first authors from Asia, Eastern Europe, Africa, and Latin America and the Caribbean showed a higher prevalence of policy research in higher education than those from North America and Western Europe. By contrast, indigenous knowledge is most prominent in Western Europe (including Australia and New Zealand). The figure below demonstrates these patterns in visual format.

    Concluding remarks

    Higher education research has grown and diversified dramatically over the past two decades. The field is now established globally, with an ever-expanding array of topics and contributors. In this blog post, we shared the results of our analysis in relation to the influence of targeted funding, collaborative practices, regional differences, and journal preferences on higher education research topics. We have also indicated that certain topics have risen in prevalence in the last two decades. More patterns are included in the main research study published in Review of Education.

It is important to note that we could only include higher education papers published up to 2021, the latest available data year when we started the analyses. The impact of generative artificial intelligence and recent major shifts in global geopolitics, including the new DEI policies in the US and the broader securitisation of science, may therefore not be fully reflected in this dataset. These themes are very recent, and future studies, including replications using similar approaches, may help reveal newly emerging patterns.

    Dr Yusuf Oldac is an Assistant Professor in the Department of Education Policy and Leadership at The Education University of Hong Kong. He holds a PhD degree from the University of Oxford, where he received a full scholarship. Dr Oldac’s research spans international and comparative higher education, with a current focus on global science and knowledge production in university settings.

    Dr Francisco Olivas obtained his PhD in Sociology from The Chinese University of Hong Kong. He joined Lingnan University in August 2021. His research lies in the intersections between cultural sociology, social stratification, and subjective well-being, using quantitative and computational methods.


    Source link

  • A change in approach means research may never be the same again

    A change in approach means research may never be the same again

At first glance Liz Kendall may look like an odd choice for Secretary of State for Science, Innovation and Technology. She has never worked in science, she has rarely mentioned science directly in any intervention in her entire parliamentary career, and this is not a role with the kind of profile that allows easy entry into any future leadership race.

    Although covering a related brief has never been a disqualifying quality for any predecessors, her move from the Department for Work and Pensions following her failed welfare reforms felt more like a hasty exit than a tactical manoeuvre.

Her direct predecessor, Peter Kyle, often seemed more preoccupied with turning the UK into an “AI superpower” than with the more tedious business of how the research ecosystem is governed and how it can be manipulated to fulfil the government’s ambitions. In truth, the business of research reform is not about more grand visions, frameworks, or strategies, but the rather grubbier work of deciding where to spend a finite amount of funding on an infinite number of programmes.

    Practice and promise

Kendall’s interest in research has so far centred on the business of making the country better. As seen in her speeches during her time on the backbenches, research is about practical things like regional economies, skills, and curing diseases. For Kendall, “what matters is what works.” And in her speech at the Innovation for Growth Summit, a management theory of research reform emerged, one about balancing the speculative with the practical.

The premise of her speech was that the growth of the UK economy relies on making the most of the UK’s R&D strengths. To do that, Kendall believes the government can be neither too directive, since it must allow curiosity-driven research to prosper, nor too permissive, since funding must be directed toward government priorities, particularly when it comes to translation and application.

It is a middle-ground approach to research management for a third-way politician. In line with the three-bucket model (outlined here by Ben Johnson, current Strathclyde professor of practice in research and innovation policy and formerly of DSIT and Research England), Kendall has clarified the government’s research funding allocations. There will be £14bn for curiosity-driven R&D, £8bn toward the government’s priorities, and £7bn for scale-up support. A further £7bn has also been announced for skills and infrastructure to secure the success of each bucket of activity.

    The label problem

    The labelling of existing funds in new ways is in itself not a strategy for economic growth. Clearly, doing the same thing, with the same people, in the same ways, would lead to exactly the same outcomes with a different name. A bit like when international research became about making the UK a “science superpower” or when every ambitious research programme was a “moonshot” or relabelling every economic benefit produced through research as “levelling up”.

The boldest ambition of Kendall’s speech is perhaps the most understated. Kendall is committed to “doing fewer things better.” Judging by a speech delivered at the same event by UKRI’s Chief Executive, Ian Chapman, this simple sentiment may have massive consequences.

Chapman’s view is that the UK lacks the natural resource advantages of its major international competitors. Instead, the UK maintains its competitiveness through the smart use of its knowledge assets, even if he believes these are “undervalued and underappreciated.”

Chapman’s UKRI will be more interventionist. He will maintain curiosity-driven research but warns that UKRI will not support activities where it has no “right to win significant market-share in that sector,” and in backing spin-outs UKRI will be “much more selective.” The future being sketched out here is one of much greater direction by government and UKRI toward funding that aligns with the industrial strategy and its mission for economic growth, while maintaining a broad research base through curiosity-driven research. Clearly, funding fewer programmes more generously means that some areas of research will receive less government funding.

The government’s approach to research is coalescing around its approach to governing more broadly. As with the industrial strategy, the government is not picking winners as such but creating the conditions through which desirable policy outcomes like economic growth have a better chance of emerging. It’s a mix of directing funding toward areas where the UK may secure an advantage, like the doubling of R&D investment in critical technologies; addressing market failures through measures like the £4.5m for Women in Innovation Awards; and regulating to shape the market, with the emphasis on economic growth and sustainability in UKRI’s new framework document.

    Football’s coming home

In her speech Kendall likened the selective funding approaches to the selective sports funding of the Olympics. Alighting on a different sporting metaphor, Chapman recalled the time a non-specific European team he supports (almost definitely Liverpool) came back from 3-0 down to win the European Cup, as a reminder that through collective support researchers can achieve great things.

Perhaps UK research has been more like the England men’s football team than the current Premier League champions: the right pieces in the wrong places, with little sense of how the talent of individuals contributes to the success of the whole. In committing to funding fewer programmes better, the government wants all its stars on the pitch in top condition. The challenge is that those who go from some funding to none are likely to feel their contributions to the team’s success have been overlooked.

    Source link

  • London’s business leaders overwhelmingly support the UK’s international graduates

    London’s business leaders overwhelmingly support the UK’s international graduates

    As the UK prepares for the Graduate Route to be shortened from two years to 18 months, London’s business leaders have had their say on international graduates in the workforce, with 90% showing support.

The results of London Higher’s recent survey of 1,000 business leaders show that international talent is highly valued across London businesses – 62% of respondents view international talent as essential and a further 28% say it is important. Only 10% say foreign talent is not very important or not at all important.

    “Global graduates give London its competitive edge. Every sector of our economy benefits from the talent and energy they bring. This research shows that they don’t take opportunities away – they help create them,” said Liz Hutchinson, chief executive of London Higher – the membership organisation that promotes and acts as an advocate for higher education in the city.

    The majority of those surveyed believe that international talent plugs skills gaps (93%), drives innovation (89%) and supports London’s global competitiveness, while only a small minority of business leaders felt it reduced scope for domestic talent and innovation.

    Some 93% of respondents say that international talent helps address skills gaps in their industry, with only 4% saying that international workers reduce opportunities for UK talent.

    “By helping businesses expand, [global graduates] generate more jobs and opportunities for everyone. As the government focuses on building domestic skills through its post-16 white paper, international graduates complement these efforts by addressing immediate skills gaps in critical growth sectors,” added Hutchinson.


    Elsewhere, 91% of those surveyed view international workers as essential or helpful for the city’s competitiveness against global cities such as New York, Singapore or Paris, with only 7% saying that their relevance is limited or non-existent.

    The survey shows that support for international talent is strongest in larger, growth sector companies – and in those that think they are outperforming their competitors.

    The survey comes as anti-immigration rhetoric in the UK intensifies and the government pushes ahead with stricter immigration rules.

As domestic politics play out in headlines overseas and concerns grow around the UK’s stance as a welcoming destination for international talent, Harry Coath, head of the talent and skills programme at London’s growth agency, London & Partners, said he sees an opportunity for London to position itself as a city that truly embraces diversity – a factor he noted is central to why so many businesses choose to be based there.

    Speaking at London Higher’s conference this week, alongside Coath, Ruth Arnold, executive director of external affairs at Study Group, said the latest research is arguably the most important report London Higher has ever produced, taking into consideration this political context and the importance of employability and post-study work to today’s international students.

The UK government’s decision to cut the Graduate Route visa from two years to 18 months was first announced in May in the government’s white paper on immigration, and the change is set to take effect from January 2027.

The survey showed that business leaders think international students should be able to access work visas – 59% want to see easier access for international students to stay in the country, 28% feel the current system works, while only 10% are calling for tighter controls.

    John Dickie, CEO of BusinessLDN, commented on the report’s findings, highlighting the importance that the UK “does all it can to remain attractive to highly skilled individuals from across the globe, particularly at a time when some of our rivals are closing their doors to international students”.

    Dickie noted the government’s proposed levy on international student fees, and urged ministers to scrap these “misguided plans” that he said would “hit growth, exacerbate the sector’s financial challenges and undermine [the UK’s] soft power”.

    Source link

• Transparency, collaboration, and culture are key to winning public trust in research

Transparency, collaboration, and culture are key to winning public trust in research

The higher education sector is focusing too much on inward-facing debates on research culture and is missing out on a major opportunity to expose our culture to the public as a way to truly connect research with society.

    REF can underpin this outward turn, providing mechanisms not only for incentivising good culture, but for opening up conversations about who we are and how we work to contribute to society.

    This outward turn matters. Research and Development (R&D) delivers enormous economic and societal value, yet universities struggle to earn public trust or support for what they do. Recent nation-wide public opinion research by Campaign for Science and Engineering (CaSE) has shown that while 88 per cent of people say it is important for the Government to invest in R&D, just 18 per cent can immediately think of lots of ways R&D benefits them and their family. When talking about R&D in public focus groups, universities were rarely front of mind and are primarily seen as education institutions where students or lecturers might do R&D as an ancillary activity.

    If the university sector is to sustain legitimacy – and by extension, the political and financial foundations of UK research – we must find new ways to make our work visible, relatable, and trusted. Focusing on the culture that shapes how research is done may be the most powerful way to do this.

    Why culture matters

    Public opinion is not background noise. Public awareness, appetite and trust all shape political choices about funding, regulation, and the role of universities in national life. While CaSE’s work shows that 72 per cent of people trust universities to be honest about how much the UK government should invest in R&D, the lack of awareness about what universities do and how they do it leaves legitimacy fragile.

This fragility is starkly illustrated by recent polling from More in Common: when asked which government budgets they would most like to see cut, the public didn’t want funding cuts for R&D, yet placed universities third on the list of budgets they would be happy to see cut (alongside foreign aid and funding for the arts).

    Current approaches to improving public opinions about research in our sector have had limited success. The sector’s instinct has been to showcase outputs – discoveries, patents, and impact case studies – to boost public awareness and build support for research in universities. But CaSE polling evidence suggests that this approach isn’t cutting through: 74 per cent of the public said they knew nothing or hardly anything about R&D in their area. This lack of connection does not indicate a lack of interest: a similar proportion (70 per cent) would like to hear more about local R&D.

    Transparency

    Evidence from other sectors shows that opening up processes builds trust. In healthcare, for example, the NHS has found that when patients are meaningfully involved in decisions about their care and how services are designed, trust and satisfaction increase – not just because of outcomes, but because people can see and influence how decisions are made.

    Research from business and engineering contexts shows that people are more likely to trust companies that are open about how they operate, not just what they deliver. Together, these lessons reinforce that we should not rely on showcasing outputs alone: legitimacy comes from making visible the processes, people and cultures that underpin research.

    Universities don’t just generate knowledge; they develop the individuals who carry skills and values into the wider economy. Researchers, technicians, professional services staff and others who enable research in higher education bring curiosity, collaboration and critical thinking into every sector, both through direct collaboration and when they move beyond academia. These skills fuel innovation and problem-solving across the economy and public services, but they can only develop and thrive in supportive, inclusive research cultures. Without attention to culture, the talent pipeline that government and industry rely on is put at risk.

    Research culture makes these processes and people visible. Culture is about how research is done: the integrity of methods, the openness of data, the inclusivity of teams, the collaborations – including with the public – that make discoveries possible. These are the very things the public are keen to understand better. By opening up the black box of research and showing the culture that underpins it, we can make university research more relatable, trustworthy, and visible in everyday life.

    The role of REF in shifting the conversation

The expansion of the old Environment element of REF to encompass broader aspects of research culture offers an opportunity to help shift from an inward to a more outward-looking narrative and public conversation. The visibility and accountability that REF submissions require matter beyond academia: they give the sector a platform to showcase the values and processes that underpin research. In doing so, REF can help our sector build trust and legitimacy by making research culture part of the national conversation about R&D.

    Openness, integrity, inclusivity, and collaboration – core components of research culture – are values which the public already recognise and expect. By framing research culture as part of the story we tell – explaining not just what our universities produce but how they produce it – we can build a stronger connection with the public. Culture is the bridge between the abstract notion of investing in R&D and a lived understanding of what universities actually do in society.

    Public support for research is strong, but support for universities is increasingly fragile. Whatever the REF looks like when we unpause, we need to avoid retreating to ‘business as usual’ and closing down this opportunity to open up a more meaningful conversation about the role universities play in UK R&D and in the progress of society.

    Source link

  • REF should be about technical professionals too

    REF should be about technical professionals too

    Every great discovery begins long before a headline or journal article.

    Behind every experiment, dataset, and lecture lies a community of highly skilled technical professionals, technologists, facility managers, and infrastructure specialists. They design and maintain the systems that make research work, train others to use complex equipment, and ensure data integrity and reproducibility. Yet their contribution has too often been invisible in how we assess and reward research excellence.

The pause in the Research Excellence Framework (REF) is more than a scheduling adjustment; it’s a moment to reflect on what we value within the UK research and innovation sector.

    If we are serious about supporting excellence, we must recognise all those who make it possible – not just those whose names appear on papers or grants, but the whole team, including the technical professionals whose expertise enables every discovery.

    Making people visible in research culture

    Over the past decade, there has been growing recognition that research culture – including visibility, recognition, and support for technical professionals – is central to delivering world-class outcomes. Initiatives such as the Technician Commitment, now backed by more than 140 universities and research institutes, have led the way in embedding good practice around technical professional careers, progression, and recognition.

    Alongside this, the UK Institute for Technical Skills and Strategy (UK ITSS) continues to advocate for technical professionals nationally to ensure they are visible and their inputs are recognised within the UK’s research and innovation system. These developments have helped reshape how universities think about people, culture, and environment, creating the conditions where all contributors to research and innovation can thrive.

    A national capability – not a hidden workforce

    This shift is not just about fairness or inclusion; it's about the UK's ability to deliver on its strategic ambitions. Technical professionals are critical to achieving the goals set out in the UK Government's Modern Industrial Strategy and to the success of frontier technologies such as artificial intelligence, quantum, engineering biology, advanced connectivity, and semiconductors. These frontier sectors rely on technical specialists to design, operate, and maintain the underpinning infrastructure on which research and innovation depend.

    Without a stable, well-supported technical professional workforce, the UK risks losing the very capacity it needs to remain globally competitive. Attracting, training, and retaining this talent requires technical roles to be visible and recognised – not treated as peripheral to research, but as essential to it.

    Why REF matters

    This is where the People, Culture and Environment (PCE) element of the REF becomes critical. REF has always shaped behaviour across the sector: its weighting signals what the UK values in research and innovation. Some have argued that PCE should be reduced (or indeed removed) to simplify the REF process, ease administrative burden, or avoid what they see as subjectivity in the assessment of research culture. Others have suggested that a greater emphasis on environment would shift focus away from research excellence, or that culture is too challenging to assess consistently across institutions. But these arguments overlook something fundamental: the quality of our research – the excellence we deliver as a sector – is intrinsically tied to the conditions in which it is produced. Reducing the weighting of PCE would therefore send a contradictory message: that culture, collaboration, and support for people are secondary to outputs, rather than two sides of the same coin.

    The Stern Review and the Future Research Assessment Programme both recognised the need for a greater focus on research and innovation environments. PCE is not an optional extra; it is fundamental to research integrity, innovation, and excellence. A robust weighting reflects this reality and gives institutions an incentive to continue investing in healthy, supportive, and inclusive environments.

    Universities have already made significant progress on this by developing new data systems, engaging staff, and benchmarking culture change. There is clear evidence that the proposed PCE focus has driven positive shifts in institutional behaviour. To step away from this now would risk undoing that progress and undermining the growing recognition of technical professionals as central to research and innovation success.

    Including technical professionals explicitly within REF delivers real benefits for both technical professionals and their institutions, and ultimately strengthens research excellence. For technicians, recognition within the PCE element encourages universities to create the kind of environments in which they can thrive – cultures that value their expertise, provide clearer career pathways, invest in skills, and ensure they have the support and infrastructure to contribute fully to research. Crucially, REF 2029 also enables institutions to submit outputs led by technical colleagues, recognising their role in developing methods, tools, data, and innovations that directly advance knowledge.

    For universities, embedding this broader community within PCE strengthens the systems REF is designed to assess. It drives safer, more efficient and sustainable facilities, improves data quality and integrity, and fosters collaborative, well-supported research environments. By incentivising investment in skilled, stable, and empowered technical teams, the inclusion of technicians enhances the reliability, reproducibility, and innovation potential of research – ultimately raising the standard of research excellence across the institution.

    From hidden to central

    REF has the power not only to measure excellence, but to shape it. By maintaining a strong focus on people and culture, it can encourage institutions to build the frameworks, leadership roles, and recognition mechanisms that enable all contributors – whether technical, academic, or professional – to excel.

    In doing so, REF can help normalise good practice, embed openness and transparency, and ensure that the environments underpinning discovery are as innovative and excellence-driven as the research itself.

    Technical professionals have always been at the heart of UK research. Their skill, creativity, and dedication underpin every discovery, innovation, and breakthrough. What’s changing now is visibility. Their contribution is increasingly recognised and celebrated as foundational to research excellence and national capability.

    As REF evolves, it must continue to reward the environments that nurture, develop, and sustain technical expertise. In doing so, it can help ensure that technical professionals are not just acknowledged but firmly established at the centre of the UK’s research and innovation system – visible, recognised, and vital (as ever) to its future success.


  • Making creative practice research visible

    Making creative practice research visible

    I still remember walking into my first Association of Media Practice Educators conference, sometime around the turn of the millennium.

    I was a very junior academic, wide-eyed and slightly overwhelmed. Until that point, I’d assumed research lived only in books and journals.

    My degree had trained me to write about creative work, not to make it.

    That event was a revelation. Here were filmmakers, designers, artists, and teachers talking about the doing as research – not as illustration or reflection, but as knowledge in its own right. There was a sense of solidarity, even mischief, in the air. We were building something together: a new language for what universities could call research.

    When AMPE eventually merged with MeCCSA – the Media, Communication and Cultural Studies Association – some of us worried that the fragile culture of practice would be swallowed by traditional academic habits. I remember standing in a crowded coffee queue at that first joint conference, wondering aloud whether practice would survive.

    It did. But it’s taken twenty-five years to get here.

    From justification to circulation

    In the early days, the fight was about legitimacy. We were learning to write short contextual statements that translated installations, performances, and films into assessable outputs. The real gatekeeper was always the Research Excellence Framework. Creative practice researchers learned to speak REF – to evidence, contextualise, and theorise the mess of creative making.

    Now that argument is largely won. REF 2021 explicitly recognised practice research. Most universities have templates, repositories, and internal mentors to support it. There are still a few sceptics muttering about rigour, but they’re the exception, not the rule.

    If creative practice makes knowledge, the challenge today is not justification. It’s circulation.

    Creative practice is inherently cross-disciplinary. It doesn’t sit neatly in the subject silos that shape our academic infrastructure. Each university has built its own version of a practice research framework – its own forms, repositories, and metadata – but the systems don’t talk to one another. Knowledge that begins in the studio too often ends up locked inside an institutional database, invisible to the rest of the world.

    A decade of blueprints

    Over the past few years, a string of national projects has tried to fix that.

    PRAG-UK, funded by Research England in 2021, mapped the field and called for a national repository, metadata standards, and a permanent advisory body. It was an ambitious vision that recognised practice research as mature and ready to stand alongside other forms of knowledge production.

    Next came Practice Research Voices and SPARKLE in 2023 – both AHRC-funded, both community-driven. PR Voices, led by the University of Westminster, tested a prototype repository built on the Cayuse platform. It introduced the idea of the practice research portfolio – a living collection that links artefacts, documentation, and narrative. SPARKLE, based at Leeds with the British Library and EDINA, developed a technical roadmap for a national infrastructure, outlining how such a system might actually work.
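
    By way of illustration only, a portfolio record along those lines could be expressed as shared, machine-readable metadata. The sketch below is purely hypothetical – none of the field names or values are taken from the PR Voices, SPARKLE or ENACT specifications – and it simply shows how artefacts, documentation and a contextual narrative might be linked under persistent identifiers so that different repositories could exchange them.

    from dataclasses import dataclass, field

    @dataclass
    class Artefact:
        title: str
        artefact_type: str   # e.g. "film", "installation", "performance"
        identifier: str      # a persistent identifier such as a DOI, where one exists
        location: str        # where the artefact or its documentation is held

    @dataclass
    class Portfolio:
        title: str
        researcher_orcid: str    # ORCID iD of the lead researcher
        narrative: str           # the contextual statement framing the practice as research
        artefacts: list = field(default_factory=list)

        def to_record(self):
            """Flatten the portfolio into a plain dictionary another repository could ingest."""
            return {
                "title": self.title,
                "researcher": self.researcher_orcid,
                "narrative": self.narrative,
                "artefacts": [vars(a) for a in self.artefacts],
            }

    # Placeholder example only: not a real project, ORCID iD or DOI.
    record = Portfolio(
        title="Example practice research enquiry",
        researcher_orcid="0000-0000-0000-0000",
        narrative="Contextual statement linking the artefacts to the research questions.",
        artefacts=[Artefact("Example film", "film", "10.0000/example", "institutional repository")],
    ).to_record()
    print(record)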

    And now we have ENACT – the Practice Research Data Service, funded through UKRI’s Digital Research Infrastructure programme and again led by Westminster. ENACT’s job is to turn all those reports into something real: a national, interoperable, open data service that makes creative research findable, accessible, and reusable. For the first time, practice research is being treated as part of the UK’s research infrastructure, not a quirky sideshow to it.

    A glimpse of community

    In June 2025, Manchester Metropolitan University hosted The Future of Practice Research. For once, everyone was in the same room – the PRAG-UK authors, the SPARKLE developers, the ENACT team, funders, librarians, and plenty of curious researchers. We swapped notes, compared schemas, and argued cheerfully about persistent identifiers.

    It felt significant – a moment of coherence after years of fragmentation. For a day, it felt like we might actually build a network that could connect all these efforts.

    A few weeks later, I found myself giving a talk for Loughborough University’s Capturing Creativity webinar series. Preparing for that presentation meant gathering up a decade of my own work on creative practice research – the workshops I’ve designed, the projects I’ve evaluated, the writing I’ve done to help colleagues articulate their practice as research. In pulling all that together, I realised how cyclical this story is.

    Back at that first AMPE conference, we were building a community from scratch. Today, we’re trying to build one again – only this time across digital platforms, data standards, and research infrastructure.

    The policy challenge

    If you work in research management, this is your problem too. Practice research now sits comfortably inside the REF, but not inside the systems that sustain the rest of academia. We have no shared metadata standards, no persistent identifiers for creative outputs, and no national repository.

    Every university has built its own mini-ecosystem. None of them connect.

    The sector needs collective leadership – from UKRI, the AHRC, Jisc, and Universities UK – to treat creative practice research as shared infrastructure. That means long-term funding, coordination across institutions, and skills investment for researchers, librarians, and digital curators.

    Without that, we’ll keep reinventing the same wheel in different corners of the country.

    Coming full circle

    Pulling together that presentation for Capturing Creativity reminded me how far we’ve come – and how much remains undone. We no longer need to justify creative practice as research. But we still need to build the systems, the culture, and the networks that let it circulate.

    Because practice research isn’t just another output type. It’s the imagination of the academy made visible.

    And if the academy can’t imagine an infrastructure worthy of its own imagination, then we really haven’t learned much from the last twenty-five years.
