
The Post-16 education and skills white paper might not have a lot of specifics in it but it does mostly make sense.
The government’s diagnosis is that the homogeneity of sector outputs is a barrier to growth. Their view, emerging from the industrial strategy, is that it is an inefficient use of public resources to have organisations doing the same things in the same places. The ideal is specialisation where universities concentrate on the things they are best at.
There are different kinds of nudges to achieve this goal. One is the suggestion that the REF could more closely align to the government's missions. The detail is not there, but it is possible to see how impact could be made to be about economic growth, or how funding could be shifted more toward applied work. There is a suggestion that research funding should consider the potential of places (maybe that could lead to some regional multipliers, who knows). And there are already announced steps around the reform of HEIF and new support for spin-outs.
All of these things might help, but they will not be enough to fundamentally change the research ecosystem. If the incentives stay broadly the same, researchers and universities will continue to do broadly the same things, irrespective of how much the government wants more research aimed at growing the economy.
The potentially biggest reform comes with the least detail. The paper states:
We will incentivise this specialisation and collaboration through research funding reform. By incentivising a more strategic distribution of research activity across the sector, we can ensure that funding is used effectively and that institutions are empowered to build deep expertise in areas where they can lead. This may mean a more focused volume of research, delivered with higher quality, better cost recovery, and stronger alignment to short- and long-term national priorities. Given the close link between research and teaching, we expect these changes to support more specialised and high quality teaching provision as well.
The implication here is that if research funding is allocated differently then providers will choose to specialise their teaching, because research and teaching are linked. Before we get to whether there is a link between research funding and teaching (spoiler: there is not), it is worth unpacking two other implications.
The first is that the "strategic distribution" element will have entirely different impacts depending on what the strategy is and what the distribution mechanism is. The paper states that there could, broadly, be three kinds of providers: teaching only, teaching with applied research, and research institutions (who presumably also do teaching). The strategy is to allow providers to focus on their strengths, but the problem is that it is entirely unclear which strengths, or how they will be measured. For example, some researchers are doing work that is economically impactful but perhaps not the most academically groundbreaking. Presumably this is not the activity the government would wish to deprioritise, but it could be if measured by current metrics. The paper also doesn't explain how providers with pockets of research excellence within an overall weaker research profile could maintain their research infrastructure.
The white paper suggests that the sector should focus on fewer but better funded research projects. This makes sense if the aim is to improve cost recovery on individual research projects, but improving the unit of resource by concentrating the overall allocation won't necessarily improve the financial sustainability of research generally. A strategic decision to align research funding more closely with the industrial strategy would leave some providers exposed. A strategic decision to invest in research potential rather than research performance would harm others. A focus on regions, or London, or excellence wherever it may be, would have a different impact again. The distribution mechanism is a second-order question to the overall strategy, which has not yet dealt with some difficult trade-offs.
On its own terms, it also seems that research funding is not a good indicator of teaching specialism.
When the White Paper suggests that the government can “incentivise specialisation and collaboration through research funding reform”, it is worth asking what – if any – links there currently are between research funding and teaching provision.
There are two ways we can look at this. The first looks at current research income from the UK government to each provider (either directly, or via UKRI) by cost centre, and compares that to the student FTE associated with that cost centre within a provider.
We’re at a low resolution – this split of students isn’t filterable by level or mode of study, and finances are sometimes corrected after the initial publication (we’ve looked at 2021-22 to remove this issue). You can look at each cost centre to see if there is a relationship between the volume of government research funding and student FTE – and in all honesty there isn’t much of one in most cases.
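For what it's worth, the check itself is simple to sketch. Assuming a tidy extract with one row per provider and cost centre (the file and column names below are invented for illustration, not the HESA field names), something like this surfaces the per-cost-centre relationship:

```python
import pandas as pd

# Hypothetical tidy extract: one row per provider x cost centre, with
# government research income (£) and student FTE for 2021-22.
df = pd.read_csv("income_and_fte_by_cost_centre.csv")

# Correlation between government research income and student FTE,
# computed across providers within each cost centre.
corr = (
    df.groupby("cost_centre")
      .apply(lambda g: g["gov_research_income"].corr(g["student_fte"]))
      .sort_values()
)
print(corr)  # weak in most cost centres, per the finding above
```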
If you think about it, that's kind of a surprise – surely a larger department would have more of both? – but there are some providers that are clearly known for high-quality research rather than large numbers of students.
So to build quality into our thinking we turn to the REF results (we know that there is generally a good correlation between REF outcomes and research income).
Our problem here is that REF results are presented by unit of assessment – a subject grouping that maps cleanly neither to cost centres nor to the CAH hierarchy used more commonly in student data (for more on the wild world of subject classifications, DK has you covered). This is by design, of course – an academic with training in biosciences may well live in the biosciences department and the biosciences cost centre, but there is nothing to stop them researching how biosciences is taught (outputs of which might be returned to the Education cost centre).
What has been done here is a custom mapping at CAH3 level between the subjects students are studying and REF2021 submissions – the axes are student headcount (you can filter by mode and level, and choose whichever academic year you fancy looking at) against the FTE of staff submitted to REF2021 – with a darker blue blob showing a greater proportion of the submission rated as 4* in the REF (there's a filter at the bottom if you want to look at just high-performing departments).
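For those curious how such a chart hangs together, here is a minimal sketch. The file and column names are placeholders, and the CAH3-to-UoA lookup stands in for the hand-built mapping described above:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder inputs: student headcount by provider x CAH3 subject, REF2021
# submitted staff FTE and 4* share by provider x unit of assessment, and the
# hand-built CAH3 -> UoA mapping described in the text.
students = pd.read_csv("students_by_cah3.csv")    # provider, cah3, headcount
ref = pd.read_csv("ref2021_by_uoa.csv")           # provider, uoa, staff_fte, share_4star
mapping = pd.read_csv("cah3_to_uoa_mapping.csv")  # cah3, uoa

merged = (
    students.merge(mapping, on="cah3")
            .groupby(["provider", "uoa"], as_index=False)["headcount"].sum()
            .merge(ref, on=["provider", "uoa"])
)

# Darker blue = higher proportion of the submission rated 4*.
plt.scatter(merged["headcount"], merged["staff_fte"],
            c=merged["share_4star"], cmap="Blues")
plt.xlabel("Student headcount")
plt.ylabel("REF2021 submitted staff FTE")
plt.colorbar(label="Share of submission rated 4*")
plt.show()
```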
Again, correlations are very hard to come by (if you want you can look at a chart for a single provider across all units of assessment). It’s almost as if research doesn’t bring in money that can cross-subsidise teaching, which will come as no surprise to anyone who has ever worked in higher education.
The government’s vision for higher education is clear. Universities should specialise and universities that focus on economic growth should be rewarded. The mechanisms to achieve it feel, frankly, like a mix of things that have already been announced and new measures that are divorced from the reality of the financial incentives universities work under.
The white paper has assiduously ducked laying out some of the trade-offs and losers in the new system. Without this the government cannot set priorities and if it does not move some of the underlying incentives on student funding, regional funding distribution, greater devolution, supply-side spending like Freeports, staff reward and recognition, student number allocations, or the myriad of things that make up the basis of the university funding settlement, it has little hope of achieving its goals in specialisation or growth.

The pause to reflect on REF 2029 has reignited debate about what the exercise should encompass – and in particular whether and how research culture should be assessed.
Open research is a core component of a strong research culture. Now is the time to take stock of what has been achieved, and to consider how REF can promote the next stage of culture change around open research.
Open research can mean many things in different fields, as the UNESCO Recommendation on Open Science makes clear. Wherever it is practised, open research shifts focus away from outputs and onto processes, with the understanding that if we make the processes around research excellent, then excellent outcomes will follow.
Being open allows quality assurance processes to work, and therefore research to be trustworthy. Although not all aspects of research can be open (sensitive personal data, for example), an approach to learning about the world that is as open as possible differentiates academic research from almost all other routes to knowledge. Open research is not just a set of practices – it’s part of the culture we build around integrity, collaboration and accountability.
But doing research openly takes time, expertise, support and resources. As a result, researchers can feel vulnerable. They can worry that taking the time to focus on high-quality research processes might delay publication and risk them being scooped, or that including costs for open research in funding bids might make them less likely to be funded; they worry about jeopardising their careers. Unless all actors in the research ecosystem engage, then some researchers and some institutions will feel that they put themselves at a disadvantage.
Open research is, therefore, a collective action problem, requiring not only policy levers but a culture shift in how research is conducted and disseminated, which is where the REF comes in.
Of all the things that influence how research is done and managed in the UK HE sector, the REF is the one that perhaps attracts most attention, despite the funds guided by its outcomes being far smaller than those distributed to HEIs in other ways.
One of the reasons for this attention is that REF is one of the few mechanisms to address collective action problems and drive cultural change in the sector. It does this in two ways, by setting minimum standards for a submission, and by setting some defined assessment criteria beyond those minimum standards. Both mechanisms provide incentives for submitting institutions to behave in particular ways. It is not enough for institutions to simply say that they behave in this way – by making submissions open, the REF makes institutions accountable for their claims, in the same way as researchers are made accountable when they share their data, code and materials.
So, then, how has this worked in practice?
A review of the main panel reports from REF 2021 shows that evidence of open research was visible across all four main panels, but unevenly distributed. Panel A highlighted internationally significant leadership in Public Health, Health Services and Primary Care (UoA 2) and Psychology, Psychiatry and Neuroscience (UoA 4), while Panel B noted embedded practices in Chemistry (UoA 8) and urged Computer Science and Informatics (UoA 11) to make a wider shift towards open science through sharing data, software, and protocols. Panel C pointed to strong examples in Geography and Environmental Studies (UoA 14) and in Archaeology (UoA 15), where collaboration, transparency, and reproducibility were particularly evident. By contrast, Panel D – and parts of Panel C – showed how definitions of open research can be more complex: what constitutes 'open research' is much more nuanced and varied in these disciplines, and these disciplines did not always demonstrate how they were engaging with institutional priorities on open research and supporting a culture of research integrity. Overall, then, open research did not feature in the reports on most UoAs.
It is clear that in 2021 there was progress, in part guided by the inclusion of a clear indicator in the REF guidance. However, there is still a long way to go, and it is clear that open research was understood and evidenced in ways that could exclude some research fields, epistemologies and transparent research practices.
With REF 2029, the new People, Culture and Environment element has created a stronger incentive to drive culture change across the sector. Institutions are embracing the move beyond compliance, making openness and transparency a core part of everyday research practice. However, alignment between this sector move, REF policy and funder action remains essential to address this collective action problem and therefore ensure that this progress is maintained.
To step back now would not only risk slowing, or even undoing, progress, but would send confused signals that openness and transparency may be optional extras rather than essentials for a trusted research system. Embedding this move is not optional: a culture of openness is essential for the sustainability of UK research and development, for the quality of research processes, and for ensuring that outputs are not just excellent, but also trustworthy in a time of mass misinformation.
Openness, transparency and accountability are key attributes of research, and hallmarks of the culture that we want to see in the sector now and in the future. Critically, coordinated sector-wide, institutional and individual actions are all needed to embed more openness into everyday research practices. This is not just about compliance – it is about a genuine culture shift in how research is conducted, shared and preserved. It is about doing the right thing in the right way. If that is accepted, then we would challenge those advocating for reducing the importance of those practices in the REF: what is your alternative, and will it command public trust?
This article was supported by contributions from:
Michel Belyk (Edge Hill University), Nik Bessis (Edge Hill University), Cyclia Bolibaugh (University of York), Will Cawthorn (University of Edinburgh), Joe Corneli (Oxford Brookes University), Thomas Evans (University of Greenwich), Eleanora Gandolfi (University of Surrey), Jim Grange (Keele University), Corinne Jola (Abertay University), Hamid Khan (Imperial College, London), Gemma Learmonth (University of Stirling), Natasha Mauthner (Newcastle University), Charlotte Pennington (Aston University), Etienne Roesch (University of Reading), Daniela Schmidt (University of Bristol), Suzanne Stewart (University of Chester), Richard Thomas (University of Leicester), Steven Vidovic (University of Southampton), Eric White (Oxford Brookes University).

Welcome to TWTQTW for June-September. Things were a little slow in July, but with back to school happening in most of the Northern Hemisphere sometime between late August and late September, the stories began pouring in.
You might think that "back to school" would deliver up lots of stories about enrolment trends, but you'd mostly be wrong. While few countries are as bad as Canada when it comes to up-to-date enrolment data, it's a rare country that can give you good enrolment information in September. What you tend to get are what I call "mood" pieces looking backwards and forwards on long-term trends: this is particularly true in places like South Korea, where short-term trends are not bad (international students are backfilling domestic losses nicely for the moment) but the long term looks pretty awful. Taiwan, whose demographic crisis is well known, saw a decline of about 7% in new enrolments, but there were also some shock declines in various parts of the world: Portugal, Denmark, and – most surprisingly – Pakistan.
Another perennial back-to-school story has to do with tuition fees. Lots of stories here. Ghana announced a new "No Fees Stress" policy in which first-year students could get their fees refunded. No doubt students will enjoy it, but the policy seems awfully close in inspiration to New Zealand's First Year Free policy, which famously had no effect whatsoever on access. But, elsewhere, tuition policy seems to be moving in the other direction. In China, rising fees at top universities sparked fears of an access gap and, in Iran, the decision of Islamic Azad University (a sort-of private institution that educates about a quarter of all Iranian youth) to continue raising tuition (partly in response to annual inflation rates now over 40%) has led to widespread dissatisfaction. Finally, tuition rose sharply in Bulgaria after the Higher Education Act was amended to link fees to government spending (i.e. more government spending, more fees). After student protests, the government moved to cut tuition by 25% from its new level, but this still left tuition substantially above where it was the year before.
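To see how a 25 per cent cut from a raised level still leaves fees higher, run some illustrative numbers (these are invented; the article gives no exact figure for Bulgaria's rise):

```latex
% Suppose fees rose from F to 1.6F under the amended Act.
% A 25\% cut applied to the new level leaves
0.75 \times 1.6F = 1.2F
% i.e. fees still 20\% above where they stood the year before.
```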
On the related issue of Student Aid, three countries stood out. The first was Kazakhstan, where the government increased domestic student grants by 61% but also announced a cut to its famous study-abroad scheme, which sends high-potential youth to highly-ranked foreign universities.
Perhaps the most stunning change occurred in Chile, where two existing student aid programs were replaced by a new system called the Fondo para la Educación Superior (FES), which is arguably unique in the world. The idea is to replace the existing system of student loans with a graduate tax: students who obtain funds through the FES will be required to pay a contribution of 10% of marginal income over about US$515/week for a period of twenty years. In substance, it is a lot like the Yale Tuition Postponement Plan, which has never been replicated at a national level because of the heavy burden placed on high income earners. A team from UCL in London analyzed the plan and suggested that it will be largely self-supporting – but only because high-earning graduates in professional fields will pay in far more than they receive, thus creating a question of potential self-selection out of the program.
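The mechanics of the contribution are easy to illustrate. A minimal sketch using the threshold and rate reported above (the example income is invented):

```python
def weekly_fes_contribution(weekly_income: float,
                            threshold: float = 515.0,  # approx. US$/week, per the article
                            rate: float = 0.10) -> float:
    """FES-style graduate tax: 10% of weekly income above the threshold,
    payable for twenty years; nothing is owed below the threshold."""
    return max(0.0, weekly_income - threshold) * rate

# A graduate earning US$800/week would contribute 0.10 * (800 - 515) = US$28.50.
print(weekly_fes_contribution(800.0))  # 28.5
```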
In Colombia, Congress passed a law mandating ICETEX (the country’s student loan agency which mostly services students at private universities) to lower interest rates, offer generous loan forgiveness and adopt an income-contingent repayment system. However, almost simultaneously, the Government of Gustavo Petro actually raised student loan interest rates because it could no longer afford to subsidize them. This story has a ways to run, I think.
On to the world of government cutbacks. In the Netherlands, given the fall of the Schoof government and the call for elections this month, universities might reasonably have expected to avoid trouble in a budget delivered by a caretaker government. Unfortunately, that wasn't the case: the 2026 budget imposed significant new cuts on the sector. In Argentina, Congress passed a law that would see higher education spending rise to 1% of GDP (roughly double the current rate). President Milei vetoed the law, but Congress overturned his veto. In theory, that means a huge increase in university funding. But given the increasing likelihood of a new economic collapse in Argentina, it's anyone's guess whether the law will actually be fulfilled.
One important debate that keeps popping up in growing higher education systems is the trade-off between quality and quantity with respect to institutions: that is, to focus money on a small number of high-quality institutions or a large number of, well, mediocre ones. Back in August, the Nigerian President, under pressure from the National Assembly to open hundreds of new universities to meet growing demand, announced a seven-year moratorium on the formation of new federal universities (I will eat several articles of clothing if there are no new federal universities before 2032). Conversely, in Peru, a rambunctious Congress passed laws to create 22 new universities in the face of Presidential reluctance to spread funds too thinly.
The news on Graduate Outcomes is not very good, particularly in Asia. In South Korea, youth employment rates are lower than they have been in a quarter-century, and the unemployment rate among bachelor's grads is now higher than for middle-school grads. This is leading many to delay graduation. The situation in Singapore is not quite as serious but is still bad enough to make undergraduates fight for spots in elite "business clubs". In China, the government was sufficiently worried about the employment prospects of the spring 2025 graduating class that it ordered some unprecedented measures to find them jobs, but while youth unemployment stayed relatively low (about 14%) at the start of the summer, the rate was back up to 19% by August. Some think these high levels of unemployment are changing Chinese society for good. Over in North America, the situation is not quite as dire, but the sudden inability of computer science graduates to find jobs seems deeply unfair to a generation that was told "just learn how to code".
With respect to Research Funding and Policy, the most gobsmacking news came from Switzerland, where the federal government decided to slash the budget of the Swiss National Science Foundation (SNSF) by 20%. In Australia, the group handling the Government's Strategic Examination of Research and Development released six more "issue" papers which, amongst other things, suggested forcing institutions to choose particular areas of specialization in areas of government "priority", a suggestion which was echoed in the UK both by the new head of UK Research and Innovation and by the President of Universities UK.
But, of course, in terms of the politicization of research, very little can match the United States. In July, President Trump issued an Executive Order which explicitly handed oversight of research grants at the many agencies which fund extramural research to political appointees who would vet projects to ensure that they were in line with Trump administration priorities. Then, on the 1st of October (technically not Q3, but it’s too big a story to omit), the White House floated the idea of a “compact” with universities, under which institutions would agree to a number of conditions including shutting down departments that “punish, belittle” or “spark violence against conservative ideas” in return for various types of funding. Descriptions of the compact from academics ranged from “rotten” to “extortion”. At the time of writing, none of the nine institutions to which this had initially been floated had given the government an answer.
And that was the quarter that was.

On the face of it, I can understand why the REF team have pressed pause on their guidance development for 2029.
The sector is in serious financial difficulties, and while most are keen to see a greater focus on People, Culture and Environment (PCE), the challenges experienced by pilot institutions with the proposed assessment mechanism were real.
We cannot get away from this.
But of course, where there’s a vacuum, people will rush to fill it with their own pet peeves and theories, up to and including a full reversion to the rules of REF 2021.
One of the biggest fallacies being promoted is the view that PCE is what Iain Mansfield, Director of Research at the Policy Exchange think tank and a former Special Adviser, called "a euphemism for Equity Diversity and Inclusion (EDI)". This conflation of REF PCE with EDI is entirely false. In fact, the PCE pilot included five different enablers of research culture, only one of which related to inclusivity. Of the others (strategy, responsibility, connectivity, and development), two were already themes in REF 2021 Environment Statements (strategy and collaboration), so this is not exactly a dramatic shift in a whole new direction.
Indeed, the Code of Practice and Environment elements of REF 2021 already placed a significant focus on EDI. Equality Impact Assessments had to be performed at every stage of the submission, EDI training for REF decision-makers was an essential requirement for even submitting to the REF, and both institution- and unit-level environment statements demanded narratives as to how equality and diversity in research careers were promoted across the institution. So anyone seeking a reversion to REF 2021 rules in order to eliminate a focus on EDI is going to be deeply disappointed.
Perhaps the biggest disappointment about this attempt to row back on any deeper focus on research culture in the next REF is that a thriving research culture is an integral part of any definition of research excellence, while being perhaps the second biggest challenge facing the sustainability of the research sector after funding. The Wellcome Trust report, and the Nuffield report that preceded it, taught us that poor incentives, highly competitive and toxic environments, precarious research careers, and unmanageable workloads are leading to questionable research practices, increased retractions, a loss of talent and reduced trust in science. And all this at a time when we really need more talent and greater trust in science. It wasn't that long ago that this all led to a Government R&D People and Culture Strategy making a clear case for better investment in research culture for the benefit of society. Yet in the recent DSIT survey of the UK Research & Development workforce, only 52 per cent of higher education respondents said the culture of their organisation enabled them to perform their best work, compared to 85 per cent in the private sector.
The point of adding greater weight, and a clearer assessment mechanism, to a broader range of culture elements in the next REF was thus to address exactly these issues. As a reminder, the international advisory group for the next REF recommended a split of 33:33:33 for PCE, outputs and impact. Reducing the weight allocated to PCE would not only reduce the attention given to promoting positive research cultures, but actually increase the weighting allocated to the element of REF that is most responsible for driving poor research cultures: publications. We know that the publish-or-perish culture is causing significant problems across the sector. Re-calibrating the assessment to put greater weight on publications would run counter to the Coalition for Advancing Research Assessment’s first commitment: to recognise the diversity of contributions to research.
I do think the pause in REF is an opportunity to think about how we recognise, incentivise and reward the better research cultures we clearly need. I've written before about how many elements of our research culture are essentially hygiene factors, and as such should not attract gold stars but be established as a basic condition of funding. There is also an opportunity to supply culture-related data (e.g., research misconduct reporting, and research staff pay gaps) alongside the other environment data already supplied to support REF decision-making. This could be formative in and of itself, as could the use of case studies (a tested REF assessment technology) by which HEIs report on their research culture interventions.
Whatever is decided, no-one working in a research-intensive institution can deny the power of the additional weight allocated to PCE in REF 2029. The knowledge that 25 per cent of the next exercise will be allocated to not just E, but P and C, has naturally been a lever staff have pulled to get culture issues up the agenda. And we've seen significant improvements: policy changes, new initiatives, and culture indicators moving in a good direction. So whilst it might feel like an easier move to simply revert to the rules of REF 2021, there is an opportunity cost to this. A lot has already been invested in preparing institutions for a greater focus on research culture, and more would need to be invested in reverting to the rules of REF 2021.
Because of the REF’s direct link both to (unhypothecated) gold and (international) glory, nothing really motivates universities more. To row back on efforts to recognise, incentivise, and reward the thriving research cultures that are at the very heart of any ‘excellent’ research institution therefore makes little sense. And it makes even less sense when financial constraints are putting those environments under even more pressure, making it more important than ever that we put people first. Can we do it in a more sensitive and manageable way? Yes, of course. Should we ditch it and run for the cover of REF 2021 rules? Absolutely not.

Hi all. I thought I would take some time to have a chat about how research policy is evolving in other countries, because I think there are some lessons we need to learn here in Canada.
One piece of news that struck me this week came from Switzerland, where the federal government is slashing the budget of the Swiss National Science Foundation (SNSF) by 20%. If the Swiss, a technological powerhouse of a nation, with a broad left-right coalition in power and a more or less balanced budget, are cutting back on science like this, then we might all have to re-think the idea that being anti-Science is just a manifestation of right-wing populism. Higher education as a whole has some thinking to do.
And right now, two countries are in fact re-thinking science quite a bit. In the UK, the new head of UK Research and Innovation (roughly, that country's One Big Granting Council) has told institutions that they might need to start "doing fewer things but doing them well", to which the President of Universities UK and vice-chancellor of Manchester Metropolitan University, Malcolm Press, added that what he was "hearing from government is that [they] don't want to be investing in areas of research where we don't have the quality and we don't have the scale." And, the kicker: "You can't have hobbyist research that's unfunded going on in institutions. We can't afford it."
Over to Australia, where a few months ago the government set up a Strategic Examination of Research and Development, which released a discussion paper, held consultations, got feedback (which it published) and has now released six more "issue" papers for consultation which detail government thinking in many different and more detailed ways. If this sounds magical to you, it is because you are from Canada, where the standard practice for policymaking is to do everything behind closed doors and treat stakeholders like mushrooms (kept in the dark with only fecal matter for company), rather than a place where policy-making is treated as a serious endeavour in which public input and expert advice are welcomed.
For today’s purposes however, what matters is not process but policy. The review is seriously considering a number of fairly radical ideas, such as creating a few national “focus areas” for research funding, which would attract higher rates of overhead and requiring institutions to focus their efforts in one of these priority areas via mission-based compacts (which are sort of like Ontario’s Multi-Year Agreements, only they are meaningful) so as to build scale and specialization.
Whew.
One thing that strikes me as odd about both the UK and Australian lines of thinking is the idea that institutional specialization matters all that much. While lots of research is done at the level of the individual lab, most "big science" – the stuff people who dream about specialization have in mind when they talk about science – happens in teams which span many institutions, and more often than not national borders as well. I get the sense that the phenomenon of institutional rankings has fried policy makers' brains somewhat: they seem to think that the correct way to think about science is at the level of the institution, rather than labs or networks of laboratories. It's kind of bananas. We can be glad that this kind of thinking has not infected Canadian policy too much, because the network concept is more ingrained here.
Which brings me to news here at home.
The rumour out of Ottawa is that in the next few months (still not clear if this is going to be fall 2025 or spring 2026) there will be an announcement of a new envelope of money for research. But very definitely not inquiry-driven research. No, this is money which the feds intend to spend as part of the increase in "defence" spending, which is supposed to rise to 2% of GDP by 2025-26 and 5% by 2035. So, the kinds of things it will need to go to will be "security", likely defined relatively generously. It will be for projects in space, protection of critical infrastructure, resiliency, maybe energy production, etc. I don't think this is going to be all about STEM and making widgets – there will be at least some room for social science in these areas and maybe humanities too, though this seems to me a harder pitch to make. It is not clear from what I have heard whether this is going to be one big pie or a series of smaller pies, divided up either by mission or by existing granting council. But the money does seem to be on its way.
Now before I go any further, I should point out that I have not heard anyone say that these new research envelopes are actually going to contain new money beyond what was spent in 2024-25. As I pointed out a couple of weeks ago, that would be hard to square with the government’s deficit-fighting commitments.
In fact, if I had to guess right now, the best-case scenario would be that the Liberals will do this by taking some or all of the 88% of the Budget 2024 research commitment to the tri-councils and pushing it into these new envelopes (worst-case scenario: they nuke the 88% of the 2024 Budget commitment they haven't yet spent and claw back money from existing commitments to make these new envelopes).
So, obviously no push here for institutional specialization, but where our debate echoes those of the UK and Australia is that all three governments seem to want to shift away from broad-based calls for inquiry driven research and toward more mission-based research in some vaguely defined areas of national priority. I know this is going to irritate and anger many people, but genuinely I don’t see many politically practical alternatives right now. As I said back here: if defending existing inquiry-driven tri-council budgets is the hill the sector chooses to die on, we’re all going to be in big trouble.
No one will force individual researchers or institutions to be part of this shift to mission-driven research, but clearly that's where the money is going to be. So, my advice to VPs Research is: get your ducks in a row on this now. Figure out who in your institution does anything that can even tangentially be described as "security-enhancing". Figure out what kinds of pitches you might want to make. Start testing your elevator pitches. There will be rewards for first movers in this area.

Research has the capacity to transform universities, communities and their places. The problem is that the funding architecture does not allow for sufficient sharing of power, benefits, or resources between communities, academics and non-academic partners.
How research funding is organised, distributed and managed spotlights issues of regional inequality and uneven cultural and economic growth.
These challenges, and how to address them, are at the heart of the Northumbria University-led deep-dive scoping report, By All, For All: The Power of Partnership, which provided a robust evidence base for best practice in partnership working and for bridging knowledge gaps.
The report makes recommendations for devolving research power, directly addressing the UKRI strategic aim to work across an expanded research ecosystem, with communities as researchers rather than just the subjects of research.
A new round of Community Innovation Practitioner (CIP) Awards, the signature award of the Creative Communities programme (a £3.9 million investment funded by the UKRI Arts and Humanities Research Council (AHRC) and based at Northumbria University), is a direct result of those recommendations.
Harnessing the transformative power of devolution, the CIP Awards embed researchers directly within communities across all four nations of the UK and its devolved mayoral regions for a year. The awards contribute to an emerging evidence base on how culture can enhance belonging, address regional inequality, deliver devolution and break down barriers to opportunity for communities.
Underpinning the work of the CIPs is a fundamental question: what if we stopped doing research to communities and started doing it with them?
The first cohort of CIPs employed co-creative methodologies to address complex social challenges across diverse communities, from Belfast’s Market area to Welsh post-industrial regions and Liverpool’s healthcare settings. Their work aimed to empower marginalised communities through participatory cultural interventions, using arts, heritage, music and creative practices as vehicles for social change and community building.
Each practitioner developed innovative approaches to bridge academic research with grassroots community needs, fostering cross-sector collaborations that challenged traditional boundaries between universities, public services and residents, using the so-called 'quadruple helix' model. Through their community-led research, the CIPs demonstrated how creative co-production can tackle issues ranging from mental health and social isolation to heritage conservation and youth engagement, ultimately building more inclusive and resilient communities.
The 2025-26 cohort of CIPs builds on that strong foundation. They will generate vital new knowledge about co-creation and the unique role played by their communities and partnerships in growth through new research, development and innovation (RD&I).
Between them, the six CIPs will transform empty retail spaces into creative hubs in Dundee; foster reconciliation in Belfast through a co-created community art exhibition; strengthen community cohesion through craft in Rochdale; address cultural health and create cultural planning across Kirklees; support cultural regeneration in Digbeth; and inspire new forms of collective storytelling in Cardiff.
With UK Government Missions focused on addressing regional inequality and economic growth, there’s growing recognition that top-down policy interventions have limited effectiveness. The CIP Awards directly address this by generating evidence from the ground up, with communities defining both problems and solutions.
This approach aligns with broader shifts in policy thinking. The recent emphasis on place-based innovation across government departments reflects a growing understanding that place really matters—that solutions appropriate for, say, Manchester might fail spectacularly in Dundee, not because of implementation failures but because they were never designed with local lived experience and landscapes in mind.
When it comes to democratising research funding and carrying out co-creation, significant obstacles remain. The promotion criteria in universities still heavily favour traditional academic outputs over community impact. REF panels, despite rhetorical commitments to broader impact, struggle to assess research where communities are co-creators rather than case studies, and funding timelines often clash with the slow work of building genuine partnerships.
The CIP Awards attempt to address some of these structural barriers by providing dedicated funding for relationship-building and requiring evidence of community partnership from the application stage. But systemic change will require broader cultural shifts.
Early indicators from Creative Communities research are promising. Projects have influenced everything from devolved government manifestos, to UNESCO heritage policies and NHS approaches to community health. But scaling this impact requires moving beyond individual projects to a wider systemic change of who gets to do RD&I.
The CIP Awards represent more than a funding opportunity: they’re a prototype for what research looks like when we take community expertise seriously. In an era of declining trust in institutions and growing demands for research relevance, this approach offers a path toward more democratic, more impactful, and ultimately more valuable knowledge creation that is truly by all, and for all.
The Creative Communities podcast, CIP Podcast – Creative Communities, is available online. You can read more about the work of AHRC Creative Communities on the website, where you can access the case studies and policy papers from the 2023-24 CIPs.

Assessing research culture has always been seen as difficult – some would say too difficult.
Yet as REF 2029 pauses for reflection, the question of whether and how culture should be part of the exercise is unavoidable. How we answer this has the potential to shape not only the REF, but also the value we place on the people and practices that define research excellence.
The push to assess research culture emerged from recognition that thriving, well-supported researchers are themselves important outcomes of the research system. The Stern Review highlighted that sustainable research excellence depends not just on research outputs but on developing the people who produce them. The Harnessing the Metric Tide report built on this understanding, recommending that future REF cycles should reward progress towards better research cultures.
A significant proportion of what we have learnt about assessing research culture came from the People, Culture and Environment indicators project, run by Vitae and Technopolis, and Research England’s subsequent REF PCE pilot exercise. Together with the broader consultation as part of the Future Research Assessment Programme, this involved considerable sector engagement over multiple years.
Nearly 1,600 people applied to participate in the PCE indicators co-development workshops. Over 500 participated across 137 institutions, with participants at all levels of career stage and roles. Representatives from ARMA, NADSN, UKRN, BFAN, ITSS, FLFDN and NCCPE helped facilitate the discussions and synthesise messages.
The workshops confirmed what many suspected about assessing research culture. It’s genuinely difficult. Nearly every proposed indicator proved problematic. Participants raised concerns about gaming and burden. Policies could become tick-box exercises. Metrics might miss crucial context. But participants saw that clusters of indicators used together and contextualised could allow institutions to tell meaningful stories about their approach and avoid the potentially misleading pictures painted by isolated indicators.
A recurring theme was the need to focus on mechanisms, processes and impacts, not on inputs. Signing up for things, collecting badges, and writing policies isn’t enough. We need to make sure that we are doing something meaningful behind these. This doesn’t mean we cannot evidence progress, rather that the evidence needs contextualising. The process of developing evidence against indicators would incentivise institutions to think more carefully about what they’re doing, why, and for whom.
The crucial point that seems to have been lost is that REF PCE never set out to measure culture directly. Instead, it aimed to assess credible indicators of how institutions enable and support inclusive, sustainable, high-quality research.
REF PCE was always intended to be an evolution, not a revolution. Culture has long been assessed in the REF, including through the 2021 Environment criteria of vitality and sustainability. The PCE framework aimed to build on this foundation, making assessment more systematic and comprehensive.
Two issues levelled at PCE have been the sector’s current financial climate and the difficulty of assessing culture fairly across institutional diversity. These are not new revelations. Both were anticipated and debated extensively in the PCE indicators project.
Workshop participants stressed that the assessment must recognise that institutions operate with different resources and constraints, focusing on progress and commitment rather than absolute spending levels. There is no one-size-fits-all answer to what a good research culture looks like. Excellent research culture can look very different across the sector and even within institutions.
This led to a key conclusion: fair assessment must recognise different starting points while maintaining meaningful standards. Institutions should demonstrate progress against a range of areas, with flexibility in how they approach challenges. Assessment needs to focus on ‘distance travelled’ rather than the destination reached.
Research England developed the REF PCE pilot following these insights. This was deliberately experimental, testing more indicators than would ultimately be practical, as a unique opportunity to gather evidence about what works, what doesn't, and what is feasible and equitable across the sector. Pilot panel members and institutions were co-designers, not assessors and assessees. The point was to develop evidence for a streamlined, proportionate, and robust approach to assessing culture.
REF already recognises that publications and impact are important outputs of research. The PCE framework extended this logic: thriving, well-supported people working across all roles are themselves crucial outcomes that institutions should develop and celebrate.
This matters because sustainable research excellence depends on the people who make it happen. Environments that support career development, recognise diverse contributions, and foster inclusion don’t just feel better to work in – they produce better research. The consultation revealed sophisticated understanding of this connection. Participants emphasised that research quality emerges from cultures that value integrity, collaboration, and support for all contributors.
Some argue that culture is an input to the system that shouldn’t be assessed directly. Others suggest establishing baseline performance requirements as a condition for funding. However, workshop discussions revealed that setting universal standards low enough for all institutions to meet renders them meaningless as drivers of improvement. Baselines are important, but alone they are not sufficient. Research culture requires attention through assessment, incentivisation and reward that goes beyond minimum thresholds.
Patrick Vallance and Research England now have unprecedented evidence about research culture assessment. Consultation has revealed sector priorities. The pilot has tested practical feasibility. The upcoming results, to be published in October, will show what approaches are viable and proportionate.
Have we encountered difficulties? Yes. Do we have a perfect solution for assessing culture? No. But this REF is a huge first step toward better understanding and valuing of the cultures that underpin research in HE. We don’t need all the answers for 2029, but we shouldn’t discard the tangible progress made through national conversations and collaborations.
This evidence base provides a foundation for informed decisions about whether and how to proceed. The question is whether policymakers will use it to build on promising foundations or retreat to assessment approaches that miss crucial dimensions of research excellence.
The REF pause is a moment of choice. We can step back from culture as ‘too hard’, or build on the most substantial sector-wide collaboration ever undertaken on research environments. If we discard what we’ve built, we risk losing sight of the people and conditions that make UK research excellent.

The Knowledge Exchange Framework (KEF) is excellent in all kinds of ways.
It eschews the competitiveness of league tables. It provides a multi-faceted look at everything that is going on in the world of knowledge exchange. And it is nuanced in comparing similar kinds of institutions.
KEF is not overly bureaucratic and it is helpful for universities in understanding where they might improve their knowledge exchange work.
It is a shame then that the release of the KEF dashboard is not as big a day for the sector as something like REF or even TEF.
The KEF is the friend that would help you move house even if it isn’t the first one you would call for a gossip. It is nice, it is helpful, it is realistic on what is and isn’t working. In the very kindest way possible it is straightforward.
The problem is that the nuance of the KEF doesn’t make for sensational coverage. There isn’t an up and down narrative, there aren’t really winners and losers, and of course there is no funding attached. It is a mirror to the world of knowledge exchange that simply shows what is going on.
And if you dig deep enough the stories are good. Queen Mary University of London is doing a superb job at IP and commercialisation as well as public and community engagement, all the while generating £760m of GVA. Birmingham Newman University is playing a significant role in local growth and regeneration through partnerships, placements, collaborations and consultancy. The University of Plymouth, meanwhile, has one of the most complete radar diagrams, with a distinct focus on its maritime work.
Every single event about how the sector promotes its value discusses the need for universities to have a better story about their places, economic impact, and the tangible impact they make on people’s lives. The KEF is a single source of hundreds of such stories, but somehow it is not cutting through.
Perhaps one of the reasons is that the consequences of doing badly (whatever badly means in the context of KEF) are very limited. It is not the public shaming tool of the TEF, it is not the funding mechanism of the REF, and it doesn't attract very much media attention. It could have been so different. As Jo Johnson, then Science Minister, said at the launch of KEF:
Our ambition is that the new KEF will become an important public indicator of how good a job universities are doing at discharging their third mission, just as the REF rewards excellence in research and the TEF rewards excellence in teaching and student outcomes.
The KEF does not reward anything, but it could (yes, its constituent parts are linked to HEIF, but that isn't quite the same thing).
Another model of funding distribution is possible. One of the major concerns about the REF is that it is becoming too complex: it measures inputs and outputs, it looks at impact (though not in the same way as KEF), and there is the ongoing debate about People, Culture, and Environment as a measure of research excellence.
To make the REF more manageable and make the KEF more meaningful perhaps it is time to add funding consequences to KEF and just shift the pressure a little bit. Previously, I have made the argument that one way of doing this would be to rationalise all of the funding mechanisms that bump into KEF:
As a starting point it would be sensible to rationalise HEIF allocations and KEF measurements. Without getting into the weeds at this stage a joint data set would likely draw from an enhanced HE-BCI survey, Innovate UK income, research income, journal data, and non-credit bearing course data from the Office for Students. The most straightforward way would be either to dispense with HEIF entirely and allocate the whole pot to KEF with a strengthened self-assessment element, like in REF, or use KEF as the sole basis for HEIF allocations. This would avoid both double counting funds and reduce administrative burden.
Given the government agenda around universities and economic contribution now might be the time to consider going further.
One measure could be to allocate a proper funding formula to KEF. In keeping with the spirit of KEF, each university would still be organised into a cluster, ensuring that like is compared with like, and funding would be allocated on a formula basis depending on each university's contribution to each of the seven areas. Each area would not have to receive the same level of funding. Instead, the government could vary it from time to time depending on national priorities, or alternatively universities could (in advance) make a pitch for their own growth priorities, ensuring they devote energy to, and are rewarded for, where their strengths lie. This would also help with greater specialisation.
Simultaneously, the government could add in a more dynamic competition element that is tied to funding. For example, given the state of the economy it might make sense to provide greater reward for the institutions contributing to local growth and innovation. This then becomes a whole new kind of funding route with funding to support the things universities are good at and a gentle nudge toward the things government wish them to do.
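What might such a formula look like mechanically? A minimal sketch, with invented provider names, scores and weights; the seven perspective labels are paraphrases of the KEF perspectives, and nothing here reflects real metrics or allocations:

```python
# Hypothetical KEF-style allocation: within a cluster, split a pot across
# providers in proportion to their weighted scores on the seven perspectives.
# All names and numbers below are invented for illustration.
PERSPECTIVES = ["research_partnerships", "working_with_business",
                "working_with_public_third_sector", "skills_enterprise",
                "local_growth", "ip_commercialisation", "public_engagement"]

def allocate(pot: float, scores: dict[str, dict[str, float]],
             weights: dict[str, float]) -> dict[str, float]:
    """Share `pot` across providers in proportion to weighted perspective scores."""
    weighted = {
        provider: sum(weights[p] * s.get(p, 0.0) for p in PERSPECTIVES)
        for provider, s in scores.items()
    }
    total = sum(weighted.values())
    return {provider: pot * w / total for provider, w in weighted.items()}

# e.g. a priority weighting that doubles the emphasis on local growth:
weights = {p: 1.0 for p in PERSPECTIVES}
weights["local_growth"] = 2.0
scores = {"Provider A": {"local_growth": 0.8, "public_engagement": 0.6},
          "Provider B": {"ip_commercialisation": 0.9, "skills_enterprise": 0.7}}
print(allocate(10_000_000, scores, weights))
```

Varying the weights is where the national-priorities nudge (or a provider's pre-agreed growth pitch) would live, while allocating within clusters keeps the like-for-like spirit of the existing exercise.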
The trade-offs, and the arguments, would of course be significant. In a world of fiscal constraint one of the trade-offs would be reducing funding allocated through REF or through grants in order to fund KEF.
Reducing funding through REF may help to reduce some pressure on it but it isn’t clear that reducing the pot for exploratory research would be a net economic good in the long-term. Reducing grant funding would mean simply trading off one lever to direct research activity for another.
Simultaneously, adding in funding allocations to KEF would undoubtedly make it into a more high-pressure exercise which would then attract costs as universities looked to maximise their returns. The exercise would need to be carefully managed to, as far as possible, rely on public data and limited returns.
Nonetheless, it seems a wasted opportunity to have an exercise primed for measuring engagement between universities and the wider society and economy, at precisely the time there seems to be a consensus that this is a good idea, but with few levers to enhance this work. The benefit of looking at a funding allocation toward KEF could be a greater spread of providers rewarded for their work, greater focus on growth and social contribution, and greater attention on the work universities do alongside research and teaching.
The road to a new kind of KEF is long. However, if the debate about REF has taught us anything, it's that trying to change a single exercise is exceptionally hard. If the current arrangements feel tired, and reform feels piecemeal, perhaps now is the time to look at the whole ecosystem and build a system which prizes universities' third mission as much as their other work.