Category: Research

  • Connecting, devolving and prioritising innovation – It’s the Northern Growth Strategy

    Connecting, devolving and prioritising innovation – It’s the Northern Growth Strategy

    Of all the things I am proud of in my life I am the most proud of being Northern.

    You have not felt love until you have seen the sun rise over the Tyne Bridge. Life is rendered that bit more vibrant by a visit to Middlesbrough Institute of Modern Art. The North is the place where kindness is a professional sport. To be Northern is to be part of a collective, part of a cultural and economic history that spans from the Stockton and Darlington Railway to the Mercury Prize.

    To be Northern is to have won the lottery of life, but it is to have not even been in the draw when it comes to infrastructure investment.

    Dark satanic mills

    The UK is stuck in a deep economic malaise. Productivity is low, which means economic output and living standards are also flat-lining. This phenomenon is even worse in the North, where a vicious cycle of poor investment in innovation assets and infrastructure weakens the case for further investment in innovation assets and infrastructure, which in turn further depresses productivity, growth and living standards.

    The regional imbalance in infrastructure investment is not inevitable, nor is it as prominent a feature of many comparable economies. It is the product of a series of deliberate policy decisions: structural, in the hyper-centralisation of a state which allocates and reallocates its resources to London and the South East, and economically reinforcing, through investment in clusters of leading assets. The result is that UKRI invests 72 per cent more per person in the Greater South East than outside it.

    This arrangement is not a good deal for London either. The weak economy across the rest of the country reduces the amount of cash available to invest in London’s leading research assets, which in turn depresses growth in the capital and, because of the size of London’s economy, in the economy as a whole. Despite the concentration of state spending, Londoners also generate far more in tax receipts than London receives in state expenditure.

    The economy cannot grow without improving productivity, productivity will not grow without improving Northern economies, and Northern economies will not improve under the current approach to state spending. It is a problem at last recognised by the government in the launch of its Northern Growth Strategy.

    We do things differently here

    There are three planks to the strategy: investment in transport, business support, and a devolution agenda that combines investment in innovation assets with promises of further devolution.

    There are further plans to come but the agenda, while light, sets out some of the big opportunities that would be genuinely transformative for the North. The first is to improve the educational opportunities for the people in the North and increase the number of graduates who stay there. Unless there are going to be caps for students in the South East (unlikely) this would inevitably mean more, not fewer, university students. The hope is that retaining graduates, and therefore intellectual capital, would provide an economic boost. This reads more as a wish than a plan. The government has not explained how it will rebalance the economy by moving graduates when there are no incentives to do so, fewer jobs in the North, and poor infrastructure.

    The wider economic plan relies on realising the benefits of key research assets aligned to the industrial strategy in areas like manufacturing, digital, and clean energy. The promise is that there will be national investment in these assets, coupled with improved transport to improve the economic performance of radial cities, allied to wider transport infrastructure to improve connections between Northern cities. The plan is to use government investment to improve economic performance both within and across cities.

    Stockton and Darlington rail

    The transport announcements have captured the headlines. There is evidence in other contexts that good transport links allied to research assets induce spillover benefits. The plan, finally backed by Labour’s perennial leadership candidate Andy Burnham, will see investment between Sheffield, Leeds, York, and Bradford, followed by a new route between Liverpool and Manchester, and complete with new connections in the Pennines connecting the rest of the North via Darlington. Part of the case for transport investment is that improving connectivity between leading knowledge assets will support economic growth.

    Ultimately, this plan recognises two crucial points about the UK’s economy. The first is that the success of the UK’s knowledge assets is the success of the wider economy. That success is predicated on a better distribution of cash and opportunity. The second is that the North’s potential has been stymied by poor infrastructure. In addressing both, the government not only confirms its ambitions for the North but further cements innovation at the heart of its economic plan.

    Source link

  • For a stronger, fairer Wales, HE belongs in every manifesto

    For a stronger, fairer Wales, HE belongs in every manifesto

    Wales stands on the cusp of significant political change. With an expanded Welsh Parliament and revised voting system, the 2026 Senedd election will mark a new chapter in Welsh democracy.

    May’s election will also be the first where 16- and 17-year-olds can cast their vote. This is a generation whose recent experience of education, and their future university and career aspirations, could be central to the choices they make at the ballot box.

    For those of us working in higher education, these changes present both a challenge and an opportunity. The new proportional voting system will likely result in a more diverse Senedd that will require greater collaboration across parties in order to be effective. For Universities Wales, this means we must continue to engage constructively with all political groups, building consensus around the vital role universities play in shaping a stronger Wales.

    A larger Senedd also means expanded committees and greater capacity for policy scrutiny. This is a welcome development that offers more space for detailed debate on the issues that matter, from economic growth and skills, to research, innovation, and community wellbeing. It also means more elected representatives who can champion higher education.

    Against this context, Universities Wales has launched a manifesto that sets out a clear vision for the future. It is a vision rooted in national renewal; one that sees universities as the essential infrastructure needed for Wales to thrive. Our message is simple: when universities succeed, Wales succeeds.

    Building jobs and skills

    In an age of rapid economic and technological change, Wales’ economy demands a flexible and highly skilled workforce. With Wales estimated to need 400,000 more graduates by 2035, universities will be central to supporting the next Welsh Government in meeting future economic needs and building a more skilled and prosperous nation.

    However, delivering on this ambition will require greater recognition of the role universities already play in delivering skills – including through the degree apprenticeships system – alongside a renewed focus on financial sustainability.

    A sustainable university sector is key to unlocking investment, productivity, and growth across Wales. Given recent challenges, an independent review of university funding and student support will be an essential step in ensuring universities can continue to deliver for Wales, now and into the future.

    Driving opportunity

    Wales’ future prosperity depends on our ability to nurture talent and equip people with the skills to thrive in a fast-moving world. Graduates are the backbone of our economy and the drivers of our future success. Put simply, there will be no growth without graduates.

    However, in Wales, we are seeing a worrying decline in the percentage of 18-year-olds choosing to go to university.

    We cannot afford to keep recycling old arguments about the value of a university education. We need to be stronger in demonstrating its essential role in shaping future prospects. If we fail, we risk leaving the next generation less qualified and with fewer pathways to success.

    Taking action to understand and reverse this trend through an independent commission on participation could unlock the potential of thousands of people, upskilling the economy and driving social mobility.

    Supporting research, innovation and local growth

    Equally important is ensuring there is recognition and appropriate support for the full spectrum of work carried out by our universities, both here at home and through their international activities, which strengthen Wales’ global presence and influence.

    For example, while university research and innovation benefits people, business and public services across the nation and beyond, it is an area that continues to be significantly underfunded; pro-rata to population size, in 2024–25, the funding allocations made by HEFCW (now Medr) for R&I in Wales were £57m lower than those made by Research England for England, and £86m lower than in Scotland.

    Consequently, our manifesto pushes for greater investment in research, innovation and commercialisation within the current system of R&I funding. This means increases to QR funding, as well as further investment through the Research Wales Innovation Fund. This will be crucial to unlocking productivity and growth across all parts of Wales.

    We are also calling for greater support for the important work universities do within their communities to drive economic growth, attract investment, support public services, and shape the places where people live, work and thrive.

    The cliff-edge of funding caused by the loss of EU Structural Funds – which Wales particularly benefitted from – and the inadequacy of replacement funding have had a detrimental impact on universities’ activity in this area. This is why long-term regional investment funding, channelled through the Welsh government, will be vital to supporting universities’ roles as anchor institutions, and encouraging private co-investment.

    Wales’ national renewal

    These priorities are not partisan. Every political party wants to see a thriving, prosperous Wales – and that vision depends on a strong, resilient and effective university sector. We know that the next Welsh government, whatever its composition, will face tough choices. But investing in universities is not a luxury, it is a strategic necessity that strengthens our economy, builds resilience, and transforms lives.

    As chair of Universities Wales, I believe our sector stands ready to play a central role in Wales’ future. The political system may be shifting, but our aim remains the same: to support a strong, fair, and successful Wales. This is a pivotal moment for our sector and for the nation. Now is the time to recognise the full value of Welsh universities, and to place them at the heart of Wales’ national renewal.

    Source link

  • Are we reducing research security risk, or just shifting it around?

    Are we reducing research security risk, or just shifting it around?

    In an era of heightened geopolitical tension, research security has shot to the top of policy agendas worldwide. Governments and institutions are implementing new measures intended to safeguard sensitive science against threats like espionage, theft, and undue foreign influence.

    The Flagship EU Conference on Research Security, held recently in Brussels, underscored the urgency: for the first time, the European Union announced plans to anchor research security in EU law via a forthcoming European Research Area. It also confirmed proposals for a range of new support measures including a European Centre of Expertise, an international collaboration due diligence platform, and a common resilience testing methodology.

    Yet amid these proactive steps lurks a critical question: are current research security frameworks genuinely reducing risk, or merely redistributing it across borders? There is growing evidence that without careful coordination, well-intentioned safeguards in one country can simply deflect threats to less-regulated arenas. In its recent note on “Research Security as a Shared Responsibility”, conference co-organiser CESAER noted the need to build resilience in Europe through “collective responsibility and trust.” It emphasised that “making a level playing field across the continent” is essential. But why should the level playing field stop there?

    The waterbed effect

    Across the world – and even within Europe – research security frameworks vary wildly. This fragmentation is more than just a bureaucratic quirk; it can actively undermine the intention to reduce risk. If one institution or country imposes rigorous security checks, a hostile actor can simply target a more permissive collaborator elsewhere, bypassing the tightest gate by entering through an unlocked side door.

    Research managers from across European countries and beyond recently voiced a clear message through the “Stronger Cooperation, Safer Collaboration” project: divergent national approaches are creating duplication, confusion, and vulnerability in research security. Some nations have strict regulatory frameworks; others rely on informal guidelines and self-regulation, and some have yet to implement any framework at all. This disharmony forces collaborating institutions to navigate a patchwork of rules. Crucially, it creates a race to the bottom: “The first to act loses out,” as one research manager put it, meaning institutions that impose tougher controls risk losing the collaborations or talent that underpin their impact and financial resilience. Conversely, overly open environments risk becoming safe havens for those trying to evade stricter jurisdictions, leading to longer-term losses through knowledge leakage from the same global collaborative projects.

    This dynamic has played out in anecdotal reports: one trusted research manager at a research-intensive university in the UK shared that they had experienced a recent case in which a PhD candidate who had unsuccessfully appealed an Academic Technology Approval Scheme (ATAS) refusal told their UK institution not to worry, as they had received an offer from elsewhere in Europe. Colleagues elsewhere in the UK and in Denmark confirmed similar experiences – Denmark and the UK being two countries now taking a firm line on vetting international research ties.

    The pattern highlights a potential unintended consequence: was the risk eliminated, or was it shifted to another institution? It raises the question as to whether early inward-facing approaches have inadvertently created a “waterbed effect”: press down on risk in one place, and it pops up elsewhere, undermining the overall goal of a safer global research environment.

    Shifting risk to the Global South

    The “risk transfer” phenomenon in research security isn’t just a North Atlantic or European problem. It can play out globally, often to the detriment of researchers in the Global South. Many high-income countries (such as the US, UK, Canada, Australia, and some EU states) have ramped up protections for their own institutions. This includes stricter export controls on sensitive technologies, visa vetting of foreign researchers, requirements for disclosure of overseas ties, and due diligence on international partners. But those seeking access to advanced research can respond by targeting less fortified partners in countries where such measures are not yet in place or enforced.

    This dynamic means that Global South collaborators sometimes become passive recipients of risk. I spoke with Dr Palesa Natasha Mothapo, Director of Research Support and Management at Nelson Mandela University and an alumna of the Women Advance Research Security Fellowship, who has led initiatives to engage institutions in South Africa and beyond on research security. She noted that South Africa has a thriving research and innovation ecosystem with highly sensitive research, but discussions on research security remain at a very early stage. Even so, Mothapo noted that institutions in South Africa generally benefit from greater financial security due to national investment and infrastructure, and colleagues from elsewhere in the Global South feel even more exposed to the risks.

    When working with international funders, institutions are often forced to accept onerous funding terms and conditions set by wealthier partners – conditions which aim to shift responsibility and liability downward. Those terms and conditions have often been formulated without fully considering the local context or capacity. For example, a major research funding agreement from a US or European sponsor might require the African or Asian sub-grantee to comply with strict cybersecurity protocols, international export controls or vetting of staff. Lacking an equal say in drafting these terms, the partner institution does its best to comply, effectively shouldering the security burden – but it may not have the in-house experts, resources or infrastructure that its counterparts are able to rely on. But if something goes wrong, who bears the blame or consequences? If our actions only result in shifting the blame but fail to mitigate the likelihood or consequences, they have failed altogether. This inequity can erode trust and perpetuate harm.

    To counteract this erosion, changes in terms and conditions need to be accompanied by the capacity strengthening, partnership and co-creation that account for what each collaborator values and seeks to protect. In the last three years, I have worked with researchers, research managers, innovation professionals and policy makers from over 50 different countries on capacity strengthening in research security. While the contexts vary greatly, there are still commonalities in the challenges we face and significant opportunity for cooperation and knowledge exchange. Raising standards everywhere is not a zero-sum game but creates a more stable, level playing field for all. This is the solution to truly reduce risk globally, instead of shifting it around.

    Towards harmonisation and mutual support

    If current research security measures risk shifting problems around, what is the remedy? The experts and stakeholders convened in Europe and elsewhere seem to converge on a key principle: harmonisation and capacity-building. Rather than each country acting in isolation (or worse, in competition) on research security, there’s a call for joint action to raise the floor globally and key actions have begun in this direction.

    There is also a growing recognition that culture change is as important as policy change. The concept of research security is relatively new in academia’s culture of openness. We need to foster a culture where security is seen not as a hindrance or a nationalist agenda, but as a shared duty to protect the integrity of science. That means those implementing security must do so in a way that is transparent and respects values like academic freedom and open science.

    To return to our original question: are we actually reducing risk or just shifting it elsewhere? At present, the answer is: a bit of both. The flurry of research security policies in recent years has plugged many gaps that were previously exploitable. Major economies are certainly harder targets for espionage and IP theft than they were a decade ago, thanks to these efforts.

    However, as protections evolve so do threats and tactics and there is little room for complacency. Some of those same efforts have diverted actors to take different approaches, including in some cases exporting the risk to less prepared quarters, or creating new frictions in the research enterprise. A chain is only as strong as its weakest link, and right now the “chain” of global science has some weak links open to exploitation. The good news is that the solution is within reach through international cooperation.

    We reduce research security risk only when we reduce it for everyone. If instead we simply push the risk around, it will eventually circle back and hit us from behind. The current trends – increased awareness, dialogue, and alignment – give reason for optimism. The UK government has indicated that international capacity strengthening will form part of their anticipated research security strategy.

    The next few years will be critical in translating these insights into practice. If we succeed, we will be on track to celebrate a genuinely safer, more collaborative global research environment – one where risk is tackled collectively, not passed like a hot potato.

    Source link

  • What school leaders need to know

    What school leaders need to know

    Special education is at a breaking point. Across the country, more children than ever are being referred for evaluations to determine whether they qualify for special education services. But there aren’t enough school psychologists or specialists on staff to help schools meet the demand, leaving some families with lengthy wait times for answers and children missing critical support. 

    The growing gap between need and capacity has inspired districts to get creative. One of the most debated solutions? Remote psychoeducational testing, or conducting evaluations virtually rather than face-to-face. 

    Can a remote evaluation accurately capture what a child needs? Will the results hold up if challenged in a legal dispute? Is remote assessment equivalent to in-person? 

    As a school psychologist and educational consultant, I hear these questions every week. And now, thanks to research and data released this summer, I can answer with confidence: Remote psychoeducational testing can produce equivalent results to traditional in-person assessment. 

    What the research shows

    In July 2025, a large-scale national study compared in-person and remote administration of the Woodcock-Johnson V Tests of Cognitive Abilities and Achievement (WJ V), the latest version of one of the most widely used and comprehensive assessment systems for evaluating students’ intellectual abilities, academic achievement, and oral language skills. Using a matched case-control design with 300 participants and 44 licensed school psychologists from across the U.S., the study found no statistically or practically significant difference in student scores between in-person and remote formats.

    In other words: When conducted with fidelity, remote WJ V testing produces equivalent results to traditional in-person assessment.

    This study builds on nearly a decade of prior research that also found score equivalency for remote administrations of the most widely used evaluations, including the WJ IV COG and ACH, RIAS-2, and WISC-V assessments.

    The findings of the newest study are as important as they are urgent. They show remote testing isn’t just a novelty–it’s a practical, scalable solution that is rooted in evidence. 

    Why it matters now

    School psychology has been facing a workforce shortage for over a decade. A 2014 national study predicted this crunch, and today districts are relying on contracting agencies and remote service providers to stay afloat. At the same time, referrals for evaluations are climbing, driven by pandemic-related learning loss, growing behavioral challenges, and increased awareness of neurodiversity. 

    The result: More children and families waiting longer for answers, while school psychologists are facing mounting caseloads and experiencing burnout. 

    Remote testing offers a way out of this cycle and a way to embrace these changes. It allows districts to bring in licensed psychologists from outside their area, without relocating staff or asking families to travel. It helps schools move through backlogs more efficiently, ensuring students get the services they need sooner. And it gives on-site staff space to do the broader preventative work that too often gets sidelined. Additionally, it offers a way to support students who choose alternate educational settings, such as virtual schools.

    Addressing the concerns

    Skepticism remains, and that’s healthy. Leaders wonder: Will a hearing officer accept remote scores in a due process case? Are students disadvantaged by the digital format? Can we trust the results to guide placement and services?

    These are valid questions, but research shows that when remote testing is done right, the results are valid and reliable. 

    Key phrase: Done right. Remote assessment isn’t just a Zoom call with a stopwatch. In the most recent study, the setup included specific safeguards:

    • Touchscreen laptops with screens 13” or larger; 
    • A secure platform with embedded digital materials;
    • Dual cameras to capture the student’s face and workspace;
    • A guided proctor in-room with the student; and
    • Standardized examiner and proctor training protocols.

    This carefully structured environment replicates traditional testing conditions as closely as possible. All four of the existing equivalency studies utilized the Presence Platform, as it already meets the established criteria.

    When those fidelity conditions are met, the results hold up. Findings showed p-values above .05 and effect sizes below .03 across all tested subtests, indicating statistical equivalence. This means schools can confidently use WJ V scores from remote testing, provided the setup adheres to best practices.
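
    For a concrete sense of what those two figures mean, here is a minimal worked example on invented scores (illustrative only – these are not the study’s data, and the scipy library is assumed to be available for the t-test): a p-value above .05 means no significant difference between formats was detected, and a Cohen’s d close to zero means any difference that does exist is negligible in practical terms.

    ```python
    # Illustrative only: invented subtest scores, not data from the WJ V study.
    from math import sqrt
    from statistics import mean, stdev

    from scipy import stats  # assumed available; used for the two-sample t-test

    in_person = [102, 98, 110, 95, 104, 99, 101, 97, 106, 100]
    remote = [101, 99, 109, 96, 103, 100, 100, 98, 105, 101]

    # Two-sample t-test: a p-value above .05 means no statistically
    # significant difference between the two administration formats was detected.
    t_stat, p_value = stats.ttest_ind(in_person, remote)

    # Cohen's d: difference in means divided by the pooled standard deviation.
    # A value below .03 is far smaller than the conventional "small" effect of 0.2.
    n1, n2 = len(in_person), len(remote)
    pooled_sd = sqrt(((n1 - 1) * stdev(in_person) ** 2 +
                      (n2 - 1) * stdev(remote) ** 2) / (n1 + n2 - 2))
    cohens_d = (mean(in_person) - mean(remote)) / pooled_sd

    print(f"p = {p_value:.3f}, d = {cohens_d:.3f}")
    ```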

    What district leaders can do

    For remote testing to succeed, schools need to take a thoughtful, structured approach. Here are three steps districts can take now.

    1. Vet providers carefully. Ask about their platform, equipment, training, and how they align with published research standards. 
    2. Clarify device requirements. Ensure schools have the right technology in place before testing begins.
    3. Build clear policies. Set district-wide expectations for how remote testing should be conducted so everyone–staff and contractors alike–is on the same page. 

    A path forward

    Remote assessment won’t solve every challenge in special education, but it can close one critical gap: timely, accurate evaluations. For students in rural districts, schools with unfilled psychologist positions, virtual school settings, or families tired of waiting for answers, it can be a lifeline.

    The research is clear. Remote psychoeducational testing works when we treat it with the same care and rigor as in-person assessment. The opportunity now is to use this tool strategically–not as a last resort, but as part of a smarter, more sustainable approach to serving students. 

    At its best, remote testing is not a compromise; it’s a path toward expanded access and stronger support for the students who need it most.

    Source link

  • A more focused research system does not by itself solve structural deficits

    A more focused research system does not by itself solve structural deficits

    Financial pressures across the higher education sector have necessitated a closer look at the various incomes and associated costs of the research, teaching and operational streams. For years, larger institutions have relied upon the cross-subsidy of their research, primarily from overseas student fees – a subsidy that is under threat from changes in geopolitics and indeed our own UK policies on immigration and visa controls.

    The UK is now between a rock and a hard place: how can it support the volume and focus of research needed to grow the knowledge-based economy of our UK industrial strategy, while also addressing the financial deficits that even the existing levels of research create?

    Several research leaders have recently been suggesting that a more efficient research system is one where higher education institutions focus on their strengths and collaborate more. But while acknowledging that efficiency savings are required, and that the relentless growth of bureaucracy – partly imposed by government but also self-inflicted within the HEIs – can be addressed, the funding gulf is far wider than these savings could possibly bridge.

    Efficiency savings alone will not solve the scale of structural deficits in the system. Furthermore, given that grant application success rates are systemically below 20 per cent and frequently below ten or even five per cent, the sector is already only funding its strongest applications. Fundamentally, demand currently far outstrips supply, leading to inefficiency and poor prioritisation decisions.

    Since most of the research costs are those supporting the salaries and student stipends of the researchers themselves, significant cost-cutting necessitates a reduction in the size of the research workforce – a reduction that would fly in the face of our future workforce requirement. We could leave this inevitable reduction to market forces, but the resulting disinvestment will likely impact the resource intensive subjects upon which much of our future economic growth depends.

    We recognise also that solutions cannot solely rely upon the public purse. So, what could we do now to improve both the efficiency of our state research spend and third-party investment into the system?

    What gets spent

    First of all, the chronic underfunding of the teaching of UK domestic students cannot continue, as it puts even further pressure on institutional resources. The recent index-linking of fees in England was a brave step to address this, but to maintain a viable UK research and innovation system, the other UK nations will also urgently need to address the underfunding of teaching. And in doing so we must remain mindful of the potential unintended consequences that increased fees might have on socio-economic exclusion.

    Second, paying a fair price for the research we do. Much has been made of the seemingly unrestricted “quality-related” funding (QR, or REG in Scotland) driven by the REF process. The reality is that QR simply makes good the missing component of research funding which, TRAC analysis now estimates, covers less than 70 per cent of the true costs of the research.

    It ought to be noted that this missing component exists in all the recently announced research buckets, extending across curiosity-driven, government-priority, and scale-up support. The government must recognise that QR is not purely the funding of discovery research, but rather the dual funding of research in general – and that the purpose of dual funding is to tension delivery models to ensure HEI efficiency of delivery.

    Next, there is a pressing need for UKRI to focus resource on the research most likely to lead to economic or societal benefit. This research spans all disciplines from the hardest of sciences to the most creative of the arts.

    Although these claims are widely made within every grant proposal, perhaps the best evidence of their validity lies in the co-investment these applications attract. We note that schemes such as EPSRC’s prosperity partnerships and its quantum technology hubs show that, when packaged to encompass a range of technology readiness levels (TRLs), industry is willing to support both low and high TRL research.

    We would propose that across UKRI more weighting is given to those applications supported by matching funds from industry or, in the case of societal impact, by government departments or charities. The next wave of matched co-funding of local industry-linked innovation should also privilege schemes which elicit genuine new industry investment, as opposed to in-kind funding, as envisaged in Local Innovation Partnership Funds. This avoids increasing research volume which is already not sustainable.

    The research workforce

    In recent times, the UKRI budgets and funding schemes for research and training (largely support for doctoral students) have been separated from each other. This can mean that the work of doctoral students is separated from the cutting-edge research that they were once the engine house of delivering. This decoupling means that the research projects themselves now require allocated, and far more expensive, post-doctoral staff to deliver. We see nothing in the recent re-branding of doctoral support to “landscape” and “focal” awards that is set to change this disconnect.

    It should be acknowledged that centres for doctoral training were correctly introduced nearly 20 years ago to ensure our students were better trained and better supported – but we would argue that the sector has now moved on and graduate schools within our leading HEIs address these needs without need for duplication by doctoral centres.

    Our proposal would be that, except for a small number of specific areas and initiatives supported by centres of doctoral training (focal awards) and central to the UK’s skills need, the normal funding of UKRI-supported doctoral students should be associated with projects funded by UKRI or other sources external to higher education institutions. This may require the reassignment of recently pooled training resources back to the individual research councils, rebalanced to meet national needs.

    This last point leads to the question of what the right shape of the HEI-based research-focused workforce is. We would suggest that emphasis should be placed on increasing the number of graduate students – many of whom aspire to move on from the higher education sector after their graduation to join the wider workforce – rather than post-doctoral researchers who (regrettably) mistakenly see their appointment as a first step to a permanent role in a sector which is unlikely to grow.

    Post-doctoral researchers are of course vital to the delivery of some research projects and comprise the academic researchers of the future. Emerging research leaders should continue to be supported through, for example, future research leader fellowships, empowered to pursue their own research ambitions. This rebalancing of the research workforce will go some way to rebalancing supply and demand.

    Organisational change

    Higher education institutions are hotbeds of creativity and empowerment. However, typical departments have an imbalanced distribution of research resources where appointment and promotion criteria are linked to individual grant income. While not underestimating the important leadership roles this implies, we feel that research outcomes would be better delivered through internal collaborations of experienced researchers where team science brings complementary skills together in partnership rather than subservience.

    This change in emphasis requires institutions to consider their team structures and HR processes. It also requires funders to reflect these changes in their assessment criteria and selection panel working methods. Again, this rebalancing of the research workforce would go some way to addressing supply and demand while improving the delivery of the research we fund.

    None of these suggestions represent a quick fix for our financial pressures, which need to be addressed. But taken together we believe them to be a supportive step, helping stabilise the financial position of the sector, while ensuring its continuing contribution to the UK economy and society. If we fail to act, the UK risks a disorderly reduction of its research capability at precisely the moment our global competitors are accelerating.

    Source link

  • A new model moves research in a more democratic direction – It could be faster and fairer too

    A new model moves research in a more democratic direction – It could be faster and fairer too

    Research quality is the dark matter of the university sector. It is hard enough to assess research after it has been done; research funders must find some way to evaluate proposals for projects which don’t exist yet. The established model for this is external expert review, combined with a panel stage where proposals and their reviews are discussed, and hard choices made.

    UK researchers will be familiar with this via our own UKRI, and everyone who has had a funding application rejected will recognise that the reviews received may be partial or mis-directed. This speaks to the idiosyncrasy and variability in individual judgments of what makes a good project, and has downstream consequences for what ultimately gets funded.

    Research from the Dutch research council published last year showed what everyone suspected – two panels making the decision about the same proposals would end up funding different projects. The results were better than complete random selection, but not by much.

    The capriciousness in funding awards has even led some to propose selecting by lottery among proposals judged to be eligible – a procedure known as partial randomisation and currently being trialled by a number of funders, including the British Academy.
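
    To make the mechanism concrete, here is a minimal sketch of partial randomisation (a hypothetical illustration, not any particular funder’s actual procedure): proposals are screened against a quality threshold, and the awards are then drawn at random from the pool that clears it.

    ```python
    # Hypothetical sketch of partial randomisation; real schemes add eligibility
    # checks, panel moderation and audit trails on top of this basic step.
    import random

    def partial_randomisation(proposals, threshold, awards):
        """Draw the funded set at random from proposals judged fundable.

        proposals: list of (proposal_id, panel_score) pairs
        threshold: minimum score needed to enter the lottery
        awards:    number of grants available
        """
        eligible = [pid for pid, score in proposals if score >= threshold]
        # Every proposal above the line has an equal chance, so panels no longer
        # have to rank-order applications they cannot reliably tell apart.
        return random.sample(eligible, k=min(awards, len(eligible)))

    scored = [("P1", 9.1), ("P2", 7.4), ("P3", 6.0),
              ("P4", 8.2), ("P5", 7.0), ("P6", 5.5)]
    print(partial_randomisation(scored, threshold=7.0, awards=3))
    ```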

    Pressure

    Issues with grant review aren’t limited to variability between individual reviewers. The pressure on researchers to win funding is driving an increased number of applications, at the same time as funders report it being harder and harder to identify and recruit reviewers. One major UK funder privately reports that they have to send around 10 invitations to obtain one review. Once received, the quality of reviews can be variable. Ideally the reviewer is both disinterested and expert in the topic of the proposal (two factors which are inherently in tension), but scarcity of reviewers often leaves funders forced to rely on a minority of willing reviewers. At the same time many researchers are submitting applications for funding without reciprocating by providing reviews. These issues of peer review are similar to those that beset journal publishing, but in research funding the individual outcomes are far more consequential for careers (and budgets).

    A model of funding evaluation which promises to address at least some of these issues is distributed peer review (DPR). Under DPR, applicants for a funding scheme review each other’s proposals. It’s an idea that originated in the astronomy community, where proposals are evaluated to allocate scarce telescope time (rather than scarce funding), and it is also common for conference papers, particularly in computer science, but the application to evaluating proposals for funding is still in its infancy.
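
    As a rough sketch of the mechanics – hypothetical, since real schemes also handle conflicts of interest, expertise matching and anonymisation – the core of DPR is an assignment step that gives every applicant other people’s proposals to review, never their own, while spreading the reviewing load evenly.

    ```python
    # Hypothetical sketch of review assignment under distributed peer review.
    # Real schemes also handle conflicts of interest, expertise matching and
    # anonymisation; this only shows the basic load-balanced allocation.
    import random

    def assign_reviews(applicants, reviews_per_proposal=3):
        """Assign each applicant's proposal to other applicants for review."""
        load = {a: [] for a in applicants}  # reviewer -> proposals they will review
        for owner in applicants:
            # Candidates are everyone except the proposal's own author, preferring
            # reviewers with the lightest load so far (random tie-breaking).
            candidates = sorted((a for a in applicants if a != owner),
                                key=lambda a: (len(load[a]), random.random()))
            for reviewer in candidates[:reviews_per_proposal]:
                load[reviewer].append(owner)
        return load

    for reviewer, proposals in assign_reviews(["A", "B", "C", "D", "E"]).items():
        print(f"{reviewer} reviews the proposals of {', '.join(proposals)}")
    ```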

    At the Research on Research Institute (RoRI) we have a mission to support funders to become more experimental in their approach – to both use strong evidence on what can work in the funding system, but also to run experiments to generate that evidence themselves. A core member of the international consortium of 19 funders which funds RoRI is the Volkswagen Foundation, a private German funder (and completely independent of the car manufacturer).

    When they decided to trial distributed peer review, running a parallel comparison of DPR and their standard process of external review and decision by an expert panel, we were able to partner with them to provide independent scientific support for the experiment. The result is a side-by-side comparison of how the two processes unfolded, how long they took, how they were experienced by applicants and which proposals got funded.

    Positive expectations

    Our analysis showed that before they took part, applicants mostly had positive expectations of the process. Each proposal was assessed by both methods, and was eligible to be funded if selected by either. When the results came in, we saw some overlap between the proposals funded under DPR and by the standard panel process. The greater number of reviews per proposal also allowed the foundation to give considerable feedback to applicants, and allowed us greater statistical insight into proposal scoring.

    Our analysis also showed that no number of reviews would make the DPR process completely consistent (meaning we should expect different proposals to be funded if it was run again, or if it was compared to the panel process). Many applicants enjoyed the insight that reviewing other proposals gave them into the funding process, and appreciated the feedback they got (although, as you would expect, this was not universal, and applicants who ended up being awarded funding were happier with the process than those who weren’t). From the foundation’s perspective it seems DPR is feasible to run, and – if run without the parallel panel stage – would allow a large reduction in the time between the application deadline and the funding award.

    It’s an incredibly rich data set, and we are delighted the foundation has committed to running – and evaluating – the DPR process over a second round. This will allow us to compare across different rounds, as well as between the evaluation by DPR and by the panel process.

    DPR represents an innovation for funding evaluation, but one that builds on the fundamental principle of peer review by researchers. The innovation is to move funding evaluation in a more democratic direction, away from the ‘gatekeeping’ model of review by a small number of senior researchers who are privileged to sit on funders’ review panels. It ensures an equal distribution of reviewing work – everyone who applies has to review – and as a consequence widens and diversifies the pool of people who are reviewing funding applications. The Foundation’s experience shows that DPR can be deployed by a funder, and the risks and complaints – of unfair reviews, unfair scoring behaviour and extra work required of applicants – managed.

    Flaws and comparisons

    Ultimately, the judgement of DPR must be on how it performs against other funding evaluation processes, not on whether it is free of potential flaws. There definitely are issues with DPR, which we have tried to make clear in our short guide for funders who are interested in adopting the procedure. These include whether, and how, DPR can be applied to calls of different sizes, and what happens if proposals require specialist review that is beyond the expertise of the cohort applying. A benefit of DPR is that it scales naturally (when there are more applications there are, by definition, more available applicant-reviewers). The issue of how appropriate DPR is for schemes where proposals cover very different topics is a more pressing one. It may not be right for all schemes, but DPR is a promising tool in the funding evaluation toolkit.

    Source link

  • The great UKRI budget shake-up

    The great UKRI budget shake-up

    UKRI has two functions. The first is to coordinate the work of seven research councils to improve research quality, impact, and infrastructure. The second is to use this convening power to achieve social good such as economic growth. The National Audit Office criticised the impact of UKRI against both of these missions.

    The standard approach of UKRI has been to fund blue-sky research, things that universities and others do that push the boundaries of accepted knowledge, and to fund a portfolio of other projects, buildings, and people, to achieve a broader set of missions shaped by DSIT.

    The forever tension is that this approach can lead to a great sprawl. The internal competition to establish grants under each research council requires a great degree of internal coordination. The bidding process for these grants is sprawlier still. And there is no guarantee that blue-sky research will produce the kinds of things the government wants in order to achieve its mission of economic growth.

    Until now, research funding has been the story of nudges toward the things government wants through bodies it influences but does not control, and through setting the legal and reporting guardrails for the train of unrestricted and unhypothecated research funding largely allocated through QR. This is now going to significantly change.

    Bucketing down

    UKRI’s budget allocation process is the single most powerful tool it has to shape the research ecosystem.

    Today’s new settlement for the next four years of research investment has gone all in on developing cross-disciplinary funding to meet government priorities such as the industrial strategy, targeted investment in key technologies, protecting curiosity-led research, and significant increases to skills and infrastructure. It is funding that follows a government’s plan, and it’s also a marked shift in how the funder operates as an organisation.

    One instructive way into what’s going on is to compare the newly published allocations explainer to the one covering 2025–26. That previous document was a slim six-page, 1000-word canter through how much each of the funding councils was getting, in essence. UKRI’s new allocations for the rest of the spending review period are a very different beast.

    First up, we’re told that it is “not possible to directly compare these allocations to previous budgets,” such is the nature of the overhaul. And while this sounds like it could be spin to distract from subtle cuts in less politically trendy areas, it is basically true – the whole budget process has been reimagined. It’s also worth observing from the get-go that the generous overall R&D spending review settlement makes it much easier to get away with these big and potentially thorny changes – compare the prompt announcement here with the ongoing wait for news about how the Office for Students’ strategic priorities grant will be reformed.

    In headline terms, it should come as little surprise to see the “bucket theory” front and centre – this had already been established by the Liz Kendall and Ian Chapman speeches last month. To recap, though, overall across the four years there is £14.5bn for curiosity-driven, foundational research (Bucket 1), £8.3bn for targeted R&D addressing strategic government and societal priorities (Bucket 2), and £7.4bn to support innovative companies’ growth (Bucket 3), as well as £8.4bn for what is basically a fourth bucket, “enabling and strengthening UK R&D”.

    What we see today is that while Bucket 1 will be the largest part of the overall settlement, the increases on offer are located elsewhere – the exact figures are tricky to definitively pinpoint, given how certain elements are slowly moved from one bucket to another over the four years.

    Most surprising is how fundamentally the new way of thinking about what UKRI funds translates into research council settlements. The only per-council announcements we get are for applicant-led research, where each council is seeing increases over the period. It’s tempting to try to draw lines back to previous settlements – but it fundamentally doesn’t work like this.

    For buckets 2 and 3, there is no breakdown by funding council. Rather, each industrial strategy area gets its own separate item (in fact, for the digital and technologies sector, it’s split into four: engineering biology, AI, quantum, and the other stuff). The majority of the investment in bucket 2 for these areas “will be delivered by research councils,” we are advised – but this will be a separate process. Aside from specific investments such as the R&D Missions Programme and the Edinburgh supercomputer, this will flow via programmes, each led by an executive chair but described clearly as cross-UKRI.

    Over in bucket 3 we can find HEIF, but much of the rest will be run through Innovate UK, with a growing focus on industrial strategy sectors. After plenty of debate within the sector about where QR should sit, it’s firmly in bucket 1, despite some suggestions that this would both misunderstand its role as a flexible fund and leave it more at risk of future cuts. The UKRI thinking is that QR is basically not government-directed, and therefore it goes in the first bucket.

    Elsewhere we see a substantial investment in the collective talent doctoral funding line item (up to more than £800m next year and over £900m by 2028–29). And we understand that other doctoral funding could come from, for example, bucket 2 cash where linked to industrial strategy priorities.

    A single mission

    UKRI chief executive Ian Chapman describes the budget as being aligned to a “single mission”. He’s talking about the mission of advancing knowledge, improving lives and driving growth – but there’s also a clear sense that the way in which the funding landscape is being restructured gives a much firmer central UKRI steer regarding what gets spent and why, with the role of the funding councils, and Research England, more focused on delivery and detail.

    The role of the industrial strategy in choosing what research investment will be made is even more prominent than many will have expected. Predictably, it’s also very lopsided – AI-related programmes will swallow £400m a year by the end of the decade, while other areas are treated with considerably more frugality.

    Whether this focus on the IS-8 sectors will translate through to choices about where funding gets invested, as we looked at earlier this week, remains to be seen. But the other issue with the industrial strategy lens, one that as the decade progresses will come into ever sharper focus, is what this will mean for the year after the spending review period, when a new government is likely and other priorities will suddenly have to be accommodated.

    For now, it’s a big ambitious reordering of how research money gets invested, which will have to be reflected within UKRI and its component parts, as they are being asked to work in different ways and pursue fundamentally different goals.

    Source link

  • Canada launches CAD$1.7bn investment to recruit 1,000 global researchers

    Canada launches CAD$1.7bn investment to recruit 1,000 global researchers

    The Global Impact+ Research Talent Initiative will fund new research chairs, early-career posts, and infrastructure upgrades across universities to draw in leading academics from overseas and Canadian researchers currently working abroad. 

    “[The] investment is about securing Canada’s place at the forefront of discovery and innovation and leveraging our strength in science to support our future well-being and prosperity for generations to come,” said Canadian minister of industry Melanie Joly, announcing the program.  

    Through recruiting top talent, the program aims to “deliver direct economic, societal and health benefits for Canadians,” she stated.  

    The U15 group of Canada’s leading research-intensive universities welcomed the details of the investment, which was initially put forward in the government’s 2026-28 Immigration Levels Plan last month.  

    Robert Asselin, U15’s CEO, described the initiative as a “call to action” to make Canada a world-leading hub for research and innovation. 

    “This is a significant step which recognises that Canada’s security and economic success depend on supporting highly qualified talent with the ideas and expertise to deliver bold new discoveries,” he said.  

    Policymakers said the initiative was one of the largest recruitment programs of its kind in the world, with minister of health Marjorie Michel emphasising the tangible benefits to Canada’s healthcare system.  

    “Better healthcare begins with better research. And in Canada, we believe in science. We value our scientists.” 

    “These investments will attract the best and brightest in the world, including Francophone researchers. This is the exact talent we need to drive better healthcare outcomes for Canadians and grow the Canadian economy,” Michel declared. 

    The investment will be split across four funding streams. The Canada Impact+ Research Chairs program has been allocated the bulk of the investment and is set to receive CAD$1bn over 12 years to help universities attract world-leading international researchers.  

    Meanwhile, the Canada Impact+ Emerging Leaders program will use CAD$120 million over 12 years to bring international early-career researchers to the country and expand the research talent pool with “fresh ideas and diverse perspectives”. 

    Two additional funds, of CAD$400m and CAD$130m respectively, will be used to strengthen research infrastructure and provide training to support doctoral students and researchers relocating to Canada.  

    Recruitment will focus on fields such as artificial intelligence, health, clean technology, quantum science, environmental resilience, democratic resilience, manufacturing, defence, and cybersecurity. 

    Karim Bardeesy, parliamentary secretary to the minister of industry, said at the announcement: “We need to invite the best and brightest from around the world and those Canadians abroad to come and do that work here in Canada.” 

    The initiative comes as Canada plans to reduce new international study permits by more than 50% in 2026, driven by wider federal efforts to reduce Canada’s temporary resident population to less than 5% of the total by the end of 2027. 

    Delivering Canada’s 2025 budget in November, finance minister Francois-Philippe Champagne said the measures were designed to give the government greater control over the immigration system and bring immigration back to “sustainable levels”. 

    The government has said immigration measures will be targeted to specifically boost the scientific benefits for Canada, such as through increasing the country’s supply of doctors as part of a new International Talent Attraction Strategy and Action Plan. 

    Source link

  • REF 2029 talks about people again but early career labour is still hard to see

    REF 2029 talks about people again but early career labour is still hard to see

    REF 2029 guidance now confirms that the previously proposed people, culture and environment (PCE) element has been renamed strategy, people and research environment (SPRE). Its weighting has been set at 20 per cent, while the main contributions to knowledge and understanding element will make up 55 per cent of the overall profile. Compared with REF 2021, outputs no longer carry the 60 per cent weighting they once did, and the environment component has increased from 15 to 20 per cent.

    Supporters of the change, including Wellcome’s John-Arne Røttingen, have been clear that this is not intended as a downgrading of research culture, instead describing the move as a rebrand designed to prevent “culture” becoming politicised, and as a way of preserving the momentum of efforts to improve research environments.

    For early-career academics at the most insecure end of the system, however, research labour still sits outside what is easiest to count. What resists straightforward counting is also what is least likely to be protected.

    Hidden research expectations

    I am one year out of my PhD, in which I explored the “care-full” and “careless” dimensions of academic work. I graduated expecting that the next few years would involve short-term teaching, fractional contracts or, if things went well, fixed-term research roles. I also entered this stage of my working life knowing that, whatever job I took, I would need to keep publishing to stand any real chance of staying in higher education.

    I write this with short-term teaching arrangements in mind. Within these roles, there is an unspoken contradiction. Many teaching contracts formally exclude research. At the same time, research remains a condition of future employability. It appears in shortlisting criteria, promotion thresholds and hiring decisions. The result is that research becomes an informal obligation. It is returned to between classes and tutorials, and carried into evenings, weekends and term breaks.

    This is where the reframing of “culture” now matters.

    Sustainability without supported labour

    In REF 2021, the environment element required institutions to demonstrate the “vitality and sustainability” of their research environments. Guidance defined this in terms of research strategy, doctoral pipelines, research income, mentoring structures for early-career researchers and the capacity to continue producing high-scoring outputs. In arts, humanities and social sciences units in particular, panels praised institutions that could demonstrate early-career development pathways, including reduced teaching loads, research leave and internal funding.

    SPRE retains the same two criteria of vitality and sustainability. In REF 2029, these will now be assessed through both an institution-level statement, weighted at 60 per cent of the SPRE score, and a unit-level statement at 40 per cent. The institution-level statement places explicit emphasis on strategy as the main way in which research environments and cultures are now explained.

    This version of sustainability rests on the assumption that research labour is formally recognised and resourced. It does not capture the volume of research produced under contracts where research does not appear in workload models or time allocation at all. In practice, sustainability comes to mean whether outputs keep appearing, rather than whether the people producing them can realistically go on working like this when their next job may depend on it.

    The limits of research expectation

    It is true that REF 2029 introduces a substantive-link rule and allows outputs from staff on part-time or non-standard contracts, so long as they meet the 0.2 FTE, 12-month employment and research-expectation threshold. This complicates any straightforward claim that REF excludes precarious researchers. It also places the power of recognition firmly at institutional level.

    REF 2029 requires that a contract include a “research expectation”, but the guidance does not require institutions to prove that time, funding or workload adjustment were provided to support the research. The term “research expectation” itself remains vague, and in practice it may amount to little more than a nominal clause. That ambiguity allows outputs to be counted even when the labour behind them was carried out under precarious, unsustainable conditions.

    Culture was never going to be a perfect remedy. As Lizzie Gadd has already argued in her “my culture is better than yours” critique of competitive approaches to research culture, the sector’s engagement with culture has been uneven and often reflects the priorities of research-intensive, or more accurately funding-intensive, institutions and STEM disciplines. Even so, culture was the one part of the framework with the reach to ask how research expectations attach themselves to people, workloads and contracts. Political? Maybe. But what about precarity isn’t political?

    What still counts

    All of this is unfolding in the context of a wider financial crisis across higher education. Falling international recruitment, rising costs and long-term funding pressure have placed many providers under severe strain, with arts, humanities and social science provision often among the most exposed. In this environment, universities trade on the career aspirations of early-career academics to manage costs, relying on their, our, my hopes of progression to sustain teaching at lower pay and with fewer protections.

    We now have a sector full of strategies, including ever more detailed strategies for people and research environments, and very little shared vision of what a sustainable early-career academic life should look like. With REF 2029 restoring the dominance of outputs and re-casting culture as a subsidiary part of institutional strategy, a clear message is taking shape. Outputs still count. The conditions under which those outputs are produced count for far less.


  • Everything you need to know about REF 2029

    Everything you need to know about REF 2029

    REF 2029 has been unpaused, and with it will undoubtedly come a whole new wave of disagreement and debate. Much like the research ecosystem itself, it is an unpredictable beast, forever buffeted by its participants, leaders, and funders.

    To get immediately to the headlines. People, Culture, and Environment (PCE) has been relabelled as Strategy, People and Research Environment (SPRE). The weighting for the new element is 20 per cent of the total, down from the 25 per cent originally allocated to PCE. Contribution to Knowledge and Understanding (CKU) (the output one) has been boosted to 55 per cent, and Engagement and Impact (E&I) (the impact case study one) has remained at 25 per cent.

    There is a significant attempt to reduce the burden of the exercise by reverting to some of the narrative practices of REF 2021 for research environments while reducing the need for new data to be collected. E&I has remained pretty much the same, and there are some concessions on portability that will only partially assuage those exercised by this sort of thing.

    So: a bit more for the things that researchers produce and a bit less for how they produce them.

    Strategy, people and research environment

    The big frame for REF 2029 has been that research is a team sport. This is why Research England and its devolved counterparts have sought to decouple individual researchers from research outputs. However, this led to a perennial debate over whether it is institutions or researchers who actually produce research, and whose work should be measured. In effect, should an average researcher be boosted by an exceptional research environment, or should an exceptional researcher be held back by an average one?

    SPRE asks institutions and units to demonstrate how their strategies contribute toward the development of people and good research environments. This will be done primarily through narrative, with metrics in support. There is flexibility in how providers may demonstrate their work in this area, but the core idea is that the work should be accompanied by a clear strategic intent.

    The actual basket of work that can fit under SPRE is varied and might include evidence of improving research cultures, new partnerships, collaborations, the development of new policies, and a range of metrics that evidence an improving culture. The major change from what had been proposed is the underpinning focus on strategy and, by extension, the broader range of activity providers are likely to submit. Culture is still very much there, but it is part of a range of activity.

    SPRE will be assessed at both an institution and unit level. The assessment will be through a statement similar to the unit level statements from the environment element in the 2021 exercise. However, the institution level score will make up 60 per cent of the SPRE score for each Unit of Assessment (UoA), and documentation linked to the UoA itself will make up the remaining 40 per cent. In effect, this means that the research infrastructure of the institution will have a greater impact than the research infrastructure of the unit where research is actually produced.
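    As a purely illustrative sketch, and not the official scoring or funding methodology, the published weightings combine along these lines (the sub-scores below are invented, and the grade-point simplification is an assumption; the real exercise produces quality profiles):

    ```python
    # Illustrative only: combines the published REF 2029 element weightings
    # (CKU 55%, E&I 25%, SPRE 20%; SPRE split 60% institution / 40% unit)
    # with hypothetical grade-point-style sub-scores.

    def spre_score(institution: float, unit: float) -> float:
        """SPRE combines the institution-level and unit-level statements 60:40."""
        return 0.60 * institution + 0.40 * unit

    def overall_score(cku: float, ei: float, institution: float, unit: float) -> float:
        """Weighted overall score for a Unit of Assessment under the 55/25/20 split."""
        return 0.55 * cku + 0.25 * ei + 0.20 * spre_score(institution, unit)

    # A hypothetical UoA: strong outputs, middling impact, an institution-level
    # environment stronger than the unit's own.
    print(round(overall_score(cku=3.4, ei=3.0, institution=3.5, unit=2.8), 2))  # 3.26
    ```

    On those made-up numbers, the institution-level statement moves the result more than the unit-level one, which is exactly the effect described above.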

    The changes to SPRE have partially emerged from the PCE pilots. Their conclusion was that it would be possible to assess PCE, but that the approach would need some adaptations for a full scale exercise. Some of the challenges included: the phenomenon of larger institutions scoring better purely because they had access to more evidence, the need for simple and timely data collection, and a need for clearer guidance and simpler processes. In short, it is technically possible to measure PCE in a robust way but it is hard to implement – which was a view shared by many at the start of the exercise.

    Measures

    The argument in favour of the 60:40 split is that it incentivises providers to improve their research environments across the whole institution. In what will be partially good news to the minister, there is also a renewed emphasis on rewarding providers that align their activities with their strategic intent in people, research, and environments.

    While we do not yet have all of the criteria, the submission burden seems to be lower than many feared. As well as the statements at a unit and institution level, there will be a data requirement at an institutional level which it is anticipated may include: which units are submitted, volume, research doctoral degrees awarded, and annual research income by source.

    At a unit level there is a similar set of measures, with some nuance. In the initial set of decisions it was proposed that the then PCE, now SPRE, could include:

    […]EDI data (that are already collected via the HESA staff record), quantitative or qualitative information on the career progression and paths of current and former research staff, outcomes of staff surveys, data around open research practices, and qualitative information on approaches to improve research robustness and reproducibility.

    There are criteria yet to be published, but it is suggested that issues of equality will be looked at primarily through the statements, and through calibration with the People and Diversity Advisory Panel and the Research Diversity Advisory Panel during the panel assessment stage. The data burden will be lower and should, ideally, rely on data that is not newly collected.
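    Purely as a hypothetical sketch of how the anticipated institution-level data return might be structured (the field names below are invented for illustration; the official data specification is still to come):

    ```python
    # Hypothetical sketch of an institution-level data return for SPRE.
    # Field names are invented; the official REF 2029 data specification
    # has not yet been published.
    from dataclasses import dataclass, field

    @dataclass
    class InstitutionalDataReturn:
        units_submitted: list[str]       # which Units of Assessment are submitted
        volume: int                      # volume of submitted work
        doctoral_degrees_awarded: int    # research doctoral degrees awarded
        research_income_by_source: dict[str, float] = field(default_factory=dict)  # annual income (£) by source

    example = InstitutionalDataReturn(
        units_submitted=["UoA 4", "UoA 23"],
        volume=420,
        doctoral_degrees_awarded=65,
        research_income_by_source={"UKRI": 3_200_000.0, "Charities": 850_000.0},
    )
    ```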

    CKU OK

    SPRE will also now be the place where institutions submit context, structure, and strategy about their units. Disciplinary statements have been removed entirely from Contribution to Knowledge and Understanding (CKU) and Engagement and Impact (E&I). This might look like a rearranging of the same information, but it also impacts the overall weightings.

    In the previous model CKU accounted for 50 per cent overall including outputs and the statement. In effect, CKU now accounts for 55 per cent of the weighting while focusing only on outputs.

    REF is now an exercise which is still mostly about the perceived quality of research outputs. There is now an upper limit on individual submissions of five per unit, unless there is an explanation of why this is exceeded. However, there is no requirement that every researcher submits (the decoupling process). Providers will have to produce a statement on how their submissions are representative, and each unit will be expected to provide an overview of their work and a statement of representation.

    On the other big debate, the portability rules have remained broadly the same. To recap, in REF 2014 the whole output was captured by whichever institution a researcher was at on the REF census date. In REF 2021, if a researcher moved between institutions, the output was captured by both. In REF 2029 the initial proposal was that the output would be captured by the institution where there is a “substantive link.” Research England has made a slight concession and will allow long-form submissions to be portable for a five-year period with sufficient justification.

    What remains unresolved

    There is a political element to all of this, of course. In the post-16 white paper it was made explicit that

    We anticipate that institutions will be recognised and rewarded, including through the Research Excellence Framework (REF) and Quality Related funding, for demonstrating clarity of purpose, demonstrating alignment with government priorities, and for measurable impact, where appropriate. While government will continue to invest across the full spectrum of research, we expect universities to be explicit about their contributions and to use this framework to guide strategic decisions.

    REF 2029, as currently set out, does not do this. Unless there are further announcements on the relationship between REF and funding, REF will do a different version of what it has always done. It assesses the research that is put in front of it. There is no additional weighting for alignment to government priorities, there are no changes to impact measurements, and while there is a focus on alignment between activity and strategic intent, it is up to institutions to define what that strategic intent is. There have been efforts to reduce the burden from the initial decisions, but this does not seem to be a significantly less burdensome activity than REF 2021.

    The minister might be pleased that the word “strategy” has replaced “culture”, and that the weightings have been fiddled with, but the direction of travel across the whole exercise has remained broadly intact. It is not quite the cultural revolution that was promised, nor is it the output-focussed exercise that some wanted. It’s a bit of a compromise, but only a little bit.

    What now

    The response of the sector will largely determine whether these changes are viewed as a success. Ultimately, REF is a political project. It is not simply an input into the dispassionate allocation of public money but requires decisions on what is valued. There is a version of the REF which is only about research outputs. There is a possible version which is only about research environments, and there are hundreds of possible weightings, criteria, frameworks, rules, and regulations in between.

    The reasonable criticism of Research England is that it proposed radical changes to the REF 2021 model and could not bring the sector with it for REF 2029. At times, it felt like the public explanation was about how a series of technical changes to the exercise achieved a set of good outcomes for the sector, without vigorously explaining what good was, who would lose out, and why the trade-offs were worth it.

    These new decisions are either a messy middle ground or a genius compromise. They cede ground to those concerned about outputs by changing weightings and moving criteria, but they maintain culture as a key focus. They provide room to include more culture-focussed statements without complex metrics. And they are politically astute enough to talk about strategy, even if the strategy isn’t the same as the government’s in every institution.

    The worst possible result would be an ongoing argument between providers, between providers and funders, and between funders and government. The unedifying spectacle of a noisy debate on why elements of the sector’s own research exercise are not fit for purpose distracts from both the enormous administrative burden of the exercise and the political case for why the sector should command significant research funding.
