In a recent Wonkhe blog, Joe Mintz discussed the challenges of policy impact in social sciences and humanities research.
He highlighted the growing importance of research impact for government (and therefore institutions) but noted significant barriers. These included a disconnect where academics prioritise research quality over early policy engagement and a mutual mistrust that limits research influence on decision-making.
We have recently published a book exploring the challenges of research impact, and Mintz's views chime with our own. Our motivation for starting the book project was personal. We have carried out a great deal of impactful research and provided support and training for others wishing to engage in impact. Yet we wondered why impact seems so poorly understood across the sector, and we had observed a clear fracture between those who wanted to do impactful research and institutions that wanted to control the process without really understanding it.
Agenda opposition
There continues to be understandable opposition from some to what has been referred to as "the impact agenda". One criticism is that impact is imposed by government and management and is at odds with the ideology of research. The argument runs that research impact is a market-driven mechanism that pressures academics to demonstrate immediate societal benefits from their research, often at the expense of intellectual freedom and critical inquiry, and that metric-driven measurement of impact may not fully capture the complexity or long-term value of research.
We can certainly empathise with this perspective, but would suggest it stems, in large part, from how impact has manifested in a sector that does not really understand what it can be. In our own careers we have received management "advice" that first said not to waste our time doing impact, then, once performance-based research funding became attached to it, insisted it was very important and our impact needed to be "four star". Indifference was replaced with interference and attempts to control, to make sure we were doing it "properly" and that it could be monitored.
In trying to develop our understanding, we spoke to 25 "impactful" academics, each of whom had demonstrably achieved high-value impact from their research, as well as a range of research professionals across the sector. It soon became clear that our own observations were not outliers among those doing impactful research.
Impact success for those we spoke to came from a personal belief that saw impact ingrained in their own research practice: this was something they did because they felt it was important, not because they had been told to. The stakeholders and networks they had, often built over considerable time, were their own, not their institution's, and many protected these contacts and networks from institutional interference.
In most cases, interviewees said there was little support from their institutions; they did the work because they felt it was an important part of their research, and this symbiotic work with stakeholders provided further research opportunities. They could see the value of doing impactful research and felt personally rewarded as a result.
Many also talked of institutional interference, where there was opposition to what they were doing ("you're not doing impact properly") and advice delivered from positions of seniority, though perhaps not of knowledge or, in some cases, integrity. They were instructed to do things more in line with university systems, regardless of how poor those systems might be. There was a clear dissonance between academic identity and management culture, often informed by an "impact industry" in which PowerPoints from webinars are disseminated across institutions with little opportunity for deep knowledge to become embedded.
Secret sauce
Many spoke of a research management machine that insisted they engage with central systems so their work could be "monitored", with many people around them telling them what to do but offering no support. Such direction was often as basic as "do more impact" and "give us the evidence now". In some cases, threats were made not to submit their case studies should they not follow the "correct process", even when their work was clearly highly impactful. An odd flex for a senior leader, given that QR funding goes to the institution, not the academic.
While the research that went into this book probably threw up as many questions as answers, one thing was very clear: if impact is to be successful, it cannot be imposed upon academics or centrally controlled; it must originate from the academic's community and their own identity as a researcher. Telling someone to "do some impact because we need another case study" a year before a REF submission is not good practice. Management needs to take time to understand the research academics are doing and explore together how best to support it.
We are reminded of a comment from one interviewee, who does incredibly interesting and impactful research, and has done for many years. When asked why they do it, they simply said “because I enjoy it and I’m good at it”.
High-quality impact case studies come from high-quality research and high-quality impact. This is not something that can be gamed or systematised. Academics need to own impact for it to be successful, and institutions need to respect that.