Wing Hsieh, Deloitte Access Economics, Sydney
Behavioural public policy, like many social science disciplines, focuses heavily on what works – but we can’t assume good evidence translates into good impact. One prominent example where evidence of what works has not translated into wide-reaching societal change is prejudice reduction. After half a century of prejudice reduction research and many studies of effective interventions (Hsieh et al (2022); Paluck et al (2021)), one could hardly say that we’ve solved this pertinent social issue. My research into prejudice reduction interventions finds that relatively few interventions have been tested in the real world (so we can’t be sure a solution will work outside the lab) and that there is virtually no consideration of whether interventions have scaling potential (so many solutions may not scale to match the span of the problem). Perhaps closing the gap between evidence and tangible impact requires a better understanding of how interventions work in real-world settings, and of how we can scale those that prove effective there – in other words, we need to think systematically about implementation.
This challenge faced by behavioural public policy in maximising the impact of intervention evidence in the real world is reminiscent of that experienced in the health sciences. As far back as the 1940s, health scientists were aware that evidence of what worked did not always translate into broader clinical outcomes. In response, the health sciences have devoted significant research effort towards understanding how to systematically increase the take-up of evidence into practice by practitioners and policymakers, so much so that such research has become a distinct science – “implementation science”.
Implementation science has featured studies exploring aspects of implementation and the development of supporting frameworks. For example, studies focusing on the effective scaling of health interventions have highlighted the need for adaptation to the local context to avoid a reduction in impact in new contexts (Bauman et al (1991); Chambers et al (2013); Hawe et al (2004)). Building on this are frameworks developed in the health sciences to guide in-depth exploration of context when implementing and scaling complex health interventions. This focus on implementation, and the development of implementation science, has helped individual programs and interventions reach long-term sustainability in complex adaptive environments and has helped identify key mechanisms for effective real-world change.
Bringing some of this implementation science thinking into prejudice reduction, I developed a checklist of what matters for the scalability of prejudice reduction interventions, based on the expertise of 16 prejudice reduction experts. This checklist can be used to inform the design of new interventions so that they have the best possible chance of scaling, or to assess the scalability of existing interventions.
Consideration should be given to the design of the intervention, the costs of the intervention and of further roll-out, delivery constraints, and the context in which the intervention was developed, tested or rolled out. The table of factors below outlines the themes and sub-themes that sit under each of these consideration categories.
There are other examples of scaling and implementation research in the broader social sciences, although this work is still very much in its infancy. At the recent inaugural International Behavioural Public Policy Conference, we heard about the research of John List and his collaborators on “the science of using science”, which uses an economic model to understand the pitfalls of scaling evidence in the real world. A team of researchers at BehaviourWorks Australia and the Victorian Government Behavioural Insights Unit has developed an evidence-informed toolkit to help researchers and practitioners design interventions with the best chance of scaling.
However, there is much more to explore if we want to arm researchers and practitioners with the right tools for translating effective BPP interventions into real-world impact. The special issue of BPP on Field Experiments and Public Policy uncovered many avenues for further research, including how the impact of interventions varies from cohort to cohort, the variability and complexity of encouraging behaviour change in the real world, and how scaling differs across subject matters and disciplines.
Without deliberate investigation of the methods required to systematically improve the take-up of evidence in real-world settings at scale, grounded in an understanding of scalability, the capacity of behavioural public policy to shift behaviours where it truly matters will remain severely curtailed.
An implementation science for behavioural public policy is crucial if, as researchers, we want to effect real change.
This research was completed as part of a PhD undertaken at Monash University, supported by the Australian Government Research Training Program and Australia Post.