Research quality is the dark matter of the university sector. It is hard enough to assess research after it has been done; research funders must find some way to evaluate proposals for projects which don’t exist yet. The established model for this is external expert review, combined with a panel stage where proposals and their reviews are discussed, and hard choices are made.
UK researchers will be familiar with this via our own UKRI, and everyone who has had a funding application rejected will recognise that the reviews received may be partial or misdirected. This speaks to the idiosyncrasy and variability of individual judgments about what makes a good project, which has downstream consequences for what ultimately gets funded.
Research from the Dutch research council published last year showed what everyone suspected – two panels deciding on the same set of proposals would end up funding different projects. The results were better than completely random selection, but not by much.
The capriciousness in funding awards has even led some to propose selecting by lottery among proposals judged to be eligible – a procedure known as partial randomisation and currently being trialled by a number of funders, including the British Academy.
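For readers who want a concrete picture of what partial randomisation involves, here is a minimal sketch in Python (the proposal data and function names are ours, invented for illustration; this is the general idea, not any funder’s actual procedure): proposals judged eligible go into a pool, and the available awards are drawn from that pool at random.

```python
import random

def partial_randomisation(proposals, n_awards, seed=None):
    # Illustrative sketch only: fund a random subset of the proposals
    # judged eligible, rather than ranking every proposal.
    rng = random.Random(seed)
    eligible = [p for p in proposals if p["eligible"]]
    # If there are fewer eligible proposals than awards, fund them all.
    return rng.sample(eligible, min(n_awards, len(eligible)))

# Hypothetical call: three proposals, one award available.
proposals = [
    {"id": "A", "eligible": True},
    {"id": "B", "eligible": True},
    {"id": "C", "eligible": False},
]
print(partial_randomisation(proposals, n_awards=1, seed=42))
```

The “judged to be eligible” step is doing real work here: in the schemes being trialled, the lottery only applies to proposals that have already cleared a quality threshold.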
Pressure
Issues with grant review aren’t limited to variability between individual reviewers. The pressure on researchers to win funding is driving an increasing number of applications, at the same time as funders report that it is getting harder and harder to identify and recruit reviewers. One major UK funder privately reports having to send around 10 invitations to obtain one review. Once received, the quality of reviews can be variable. Ideally the reviewer is both disinterested and expert in the topic of the proposal (two factors which are inherently in tension), but the scarcity of reviewers often forces funders to rely on a minority of willing reviewers. At the same time, many researchers submit applications for funding without reciprocating by providing reviews. These problems with peer review are similar to those that beset journal publishing, but in research funding the individual outcomes are far more consequential for careers (and budgets).
A model of funding evaluation which promises to address at least some of these issues is distributed peer review (DPR). Under DPR, applicants to a funding scheme review each other’s proposals. The idea originated in the astronomy community, where proposals are evaluated to allocate scarce telescope time (rather than scarce funding), and a similar approach is common for conference papers, particularly in computer science, but its application to evaluating funding proposals is still in its infancy.
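To make the mechanics concrete, here is a minimal sketch of one way a DPR assignment could work (the names, the number of reviews per proposal and the round-robin scheme are our own illustrative choices, not the procedure used by any particular funder): each applicant is given a fixed number of other applicants’ proposals to review, and nobody reviews their own.

```python
import random

def assign_reviews(applicants, k=3, seed=None):
    # Illustrative sketch: each applicant reviews k proposals, never
    # their own, and every proposal receives exactly k reviews.
    rng = random.Random(seed)
    order = applicants[:]
    rng.shuffle(order)
    n = len(order)
    assignments = {a: [] for a in order}
    # Offset-based round robin: the applicant at position i reviews the
    # proposals of the applicants at positions i+1 ... i+k (mod n).
    for i, reviewer in enumerate(order):
        for offset in range(1, k + 1):
            assignments[reviewer].append(order[(i + offset) % n])
    return assignments

print(assign_reviews(["Ana", "Ben", "Chloe", "Dev", "Esra"], k=2, seed=1))
```

A real deployment also has to handle conflicts of interest and matching by topic, which is where much of the practical complexity lies.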
At the Research on Research Institute (RoRI) we have a mission to support funders to become more experimental in their approach – both to use strong evidence on what can work in the funding system and to run experiments that generate such evidence themselves. A core member of the international consortium of 19 funders which funds RoRI is the Volkswagen Foundation, a private German funder (and completely independent of the car manufacturer).
When they decided to trial distributed peer review, running a parallel comparison of DPR and their standard process of external review and decision by an expert panel, we were able to partner with them to provide independent scientific support for the experiment. The result is a side-by-side comparison of how the two processes unfolded, how long they took, how they were experienced by applicants and which proposals got funded.
Positive expectations
Our analysis showed that, before they took part, applicants mostly had positive expectations of the process. Each proposal was assessed by both methods, and was eligible to be funded if selected by either. When the results came in, we saw some overlap between the proposals funded under DPR and those funded by the standard panel process. The greater number of reviews per proposal also allowed the foundation to give considerable feedback to applicants, and gave us greater statistical insight into proposal scoring. That analysis showed that no number of reviews would make the DPR process completely consistent, meaning we should expect different proposals to be funded if it were run again, or when it is compared with the panel process.
Many applicants enjoyed the insight that reviewing other proposals gave them into the funding process, and appreciated the feedback they received (although, as you would expect, this was not universal, and applicants who were awarded funding were happier with the process than those who weren’t). From the foundation’s perspective, DPR appears feasible to run and – if run without the parallel panel stage – would allow a large reduction in the time between the application deadline and the funding award.
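Why can more reviews not make the process fully consistent? A toy simulation illustrates the general statistical point (the numbers below are invented for illustration; they are not the foundation’s data or our published analysis): when proposals are close in underlying quality relative to the noise in individual reviewers’ scores, two independent draws of reviewers will fund noticeably different sets, even with many reviews per proposal.

```python
import random

def simulate_round(true_quality, n_reviews, n_funded, rng):
    # Score each proposal as the mean of n_reviews noisy reviewer scores,
    # then "fund" the top n_funded proposals.
    scores = {}
    for pid, quality in true_quality.items():
        reviews = [quality + rng.gauss(0, 1.0) for _ in range(n_reviews)]
        scores[pid] = sum(reviews) / n_reviews
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:n_funded])

rng = random.Random(0)
# 50 hypothetical proposals with similar underlying quality.
true_quality = {i: rng.gauss(0, 0.5) for i in range(50)}
for n_reviews in (3, 10, 30):
    run_a = simulate_round(true_quality, n_reviews, n_funded=10, rng=rng)
    run_b = simulate_round(true_quality, n_reviews, n_funded=10, rng=rng)
    print(n_reviews, "reviews per proposal -> overlap:", len(run_a & run_b), "of 10")
```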
It’s an incredibly rich data set, and we are delighted the foundation has committed to running – and evaluating – the DPR process over a second round. This will allow us to compare across rounds, as well as between evaluation by DPR and by the panel process.
DPR represents an innovation for funding evaluation, but one that builds on the fundamental principle of peer review by researchers. The innovation is to move funding evaluation in a more democratic direction, away from the ‘gatekeeping’ model of review by a small number of senior researchers who are privileged to sit on funders’ review panels. It ensures an equal distribution of reviewing work – everyone who applies has to review – and as a consequence it widens and diversifies the pool of people reviewing funding applications. The foundation’s experience shows that DPR can be deployed by a funder, and that the risks and complaints – of unfair reviews, unfair scoring behaviour and extra work required of applicants – can be managed.
Flaws and comparisons
Ultimately, the judgement on DPR must rest on how it performs against other funding evaluation processes, not on whether it is free of potential flaws. There certainly are issues with DPR, which we have tried to make clear in our short guide for funders interested in adopting the procedure. These include whether, and how, DPR can be applied to calls of different sizes, and what happens if proposals require specialist review beyond the expertise of the applying cohort. A benefit of DPR is that it scales naturally (when there are more applications there are, by definition, more available applicant-reviewers). The question of how appropriate DPR is for schemes where proposals cover very different topics is more pressing. It may not be right for all schemes, but DPR is a promising tool in the funding evaluation toolkit.

