The path from early promise to widespread impact requires one thing and one thing only: scalability – the capacity to grow and expand in a robust and sustainable way. Put simply: you can only change the world at scale.
To tackle inequality in higher education, we need scalable interventions. The interventions that make the biggest difference will be those that we can successfully expand from a small group to a much bigger one.
Across many policy areas, ideas that appear promising after being tested at a small scale often have a much lower impact when expanded. Existing evidence suggests the majority of interventions – somewhere in the range of 50% to 90% – will have weak effects when scaled. This is what the economist John List terms a ‘voltage drop’: ‘when an enterprising idea falls apart at scale and positive results fizzle’.
Interventions in higher education are frequently designed at the module or school level, with the intention of eventually scaling up. Often an intervention is started by a single enthusiastic practitioner who later tries to expand it. For example, a student support programme may go from being implemented within the school of psychology to across the whole institution. Similarly, policymakers may seek to scale an idea that was successful at one institution by implementing it across a range of other institutions.
As a result, higher education emerges as a prime area where we should consider the intended scale of implementation from the outset. While many interventions struggle to scale, List argues this challenge is surmountable by building into our processes an understanding of five key factors that impede scaling.
1. False positives
The first major cause of voltage drops is the prevalence of false positives: concluding there is a significant effect when there is not. False positives can arise in a number of ways, but we can split them into three categories: statistical error, human error, and fraud.
We can go a long way to addressing this trifecta of false positives by embracing the open science movement. Key tenets of this approach include pre-registration of trials, independent evaluation, and open publication of data and code. Opening our research up in this way not only helps to prevent fraud (more prevalent than we might think in academia) but also encourages more collaboration with peers and enables others to build on our work.
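To see why statistical error alone guarantees some false positives, consider a simple simulation (an illustrative sketch, not from the article): if we run many small pilot trials of an intervention that truly has no effect, a conventional 5% significance threshold will still flag roughly one in twenty of them as "successful". The sample sizes and test below are hypothetical choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_trials = 2000    # hypothetical pilot studies, all with zero true effect
n_students = 30    # a small pilot sample per arm
false_positives = 0

for _ in range(n_trials):
    # Both arms are drawn from the same distribution: the intervention does nothing
    control = rng.normal(loc=60, scale=10, size=n_students)
    treated = rng.normal(loc=60, scale=10, size=n_students)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:        # conventional significance threshold
        false_positives += 1

rate = false_positives / n_trials
print(f"False-positive rate with no true effect: {rate:.1%}")
```

The rate comes out close to the 5% threshold by construction. Pre-registration and independent replication matter precisely because, across a large portfolio of small trials, some apparently positive results are inevitable even when nothing works.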
2. Know your intended audience
When testing your intervention, consider whether the initial test group is representative of the broader population you hope to impact. If the intervention is not designed for only one group, we should not test it with only one group.
For example, say we trial an intervention with Engineering students before rolling it out across the institution. This could cause difficulties if Engineering students are different from the wider population we are interested in. It may be that the intervention only works on our sampled population (in this case Engineering students) and no longer works when we roll it out to the entire student population.
3. Spillovers
Interventions often give us evidence of what works at a small scale, but it is difficult to anticipate how this could change when an intervention becomes a large-scale movement.
This is particularly important when we look at scaling interventions from one institution to many. We should consider that the positive effects of an intervention at the institution level may disappear once the programme is scaled further. For example, consider a career guidance programme that improves graduate outcomes at an institution. When rolled out across the country, it may alter the dynamics of the graduate labour market in such a way that the original benefits are negated.
4. Is the success due to the practitioner, or the idea?
We should consider whether the intervention, as tested, accurately reflects the characteristics it will have when deployed widely.
The key analogy here is one of chefs and ingredients. If the reason behind a restaurant’s success is its ingredients, it will be more likely to scale well, as the ingredients can be scaled across many branches. But a restaurant will struggle to scale if its success is down to the unique magic of the chef.
Similarly, an intervention may fail to scale if we can mainly attribute its positive impact to a practitioner’s individual brilliance at a specialised skill: the talented practitioner cannot be so easily scaled.
5. Rising costs
If the costs grow disproportionately with the intervention, it will struggle to scale. For example, at a small scale, it may be relatively easy to find an effective practitioner who can deliver the intervention as it was intended and have a high impact on students.
But, as we’ve seen, if the success of a programme rests on the talent of practitioners, this is unlikely to scale well. As the intervention scales and hires more staff, finding staff who can have the desired impact will become increasingly difficult and expensive.
Moving towards having an impact at scale
It is a worthwhile pursuit to make incremental but meaningful changes that improve the lives of students. Many practitioners, not to mention students themselves, will be able to attest to the difference a small-scale intervention can make to a student's life, helping to break down barriers, narrow gaps and open doors.
But to move the dial on inequality in higher education, we should build considerations around scaling into our interventions. In doing so, we can move our focus towards building an evidence base that helps us make a much larger change. By making this move, we can realise List’s powerful assertion: ‘you can only change the world at scale’.