Mario Herberz, Stephanie Mertens, Ulf J. J. Hahnel, & Tobias Brosch
It’s been over a decade since Richard Thaler and Cass Sunstein introduced the notion of choice architecture, referring to subtle changes in the decision environment that can influence behaviour while preserving individual freedom of choice. This idea has attracted considerable attention from researchers and practitioners alike, who have since been exploring the effects of choice architecture interventions in the form of framings, defaults, or reminders (to name but a few) across behavioural domains as diverse as health, environment, and finance. At the same time, policy makers across the globe have been using insights from research on choice architecture to improve public policy making.
In light of the increasing popularity of choice architecture, we felt that a summary of the current state of the literature in the form of a meta-analysis was needed. Systematically taking stock of the existing research is a way to provide practitioners with guidance for the application of choice architecture interventions and to identify gaps and challenges in the literature. With this in mind, we aimed to address the following questions:
- What does the evidence gathered over more than 10 years of research tell us about the effectiveness of choice architecture interventions?
- Are some types of interventions generally more effective than others?
- Are choice architecture interventions more effective in some behavioural domains than others?
Our analysis revealed a consistently positive effect of choice architecture interventions on behaviour across different types of interventions and behavioural domains. But we also observed substantial variation in the effectiveness of the different interventions. For instance, interventions that change the way decision contexts are structured (e.g., defaults) were generally more effective than interventions that target the way information is presented (e.g., framing) or that help decision makers follow through on their behavioural intentions (e.g., reminders). Moreover, interventions were generally more effective in the food domain than in other domains such as health behaviour or behaviour related to the environment.
These findings may guide practitioners who seek to encourage behaviour change through choice architecture and who look for assistance in selecting an appropriate intervention. At the same time, our findings reveal substantial gaps in our understanding of the processes that drive the behavioural effects of choice architecture interventions. Even when accounting for differences across types of choice architecture techniques and behavioural domains, large parts of the heterogeneity among effect sizes remain unexplained. This means that we are still very much at the beginning of understanding when and why choice architecture interventions work. As other authors have previously argued, research on choice architecture needs to account for this heterogeneity in intervention effects in order to have a real impact and to enable a thorough evaluation at the meta-analytic level.
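The heterogeneity evaluation mentioned above is typically quantified with statistics such as Cochran's Q, the between-study variance τ², and I². As a minimal illustrative sketch, and with purely hypothetical effect sizes and variances (none of these numbers come from our data), these can be computed as follows:

```python
# Hypothetical standardized effect sizes (e.g., Cohen's d) and their
# sampling variances from a set of studies -- illustrative numbers only.
effects = [0.45, 0.10, 0.62, -0.05, 0.30, 0.55]
variances = [0.02, 0.03, 0.05, 0.04, 0.02, 0.06]

# Inverse-variance weights and the fixed-effect pooled estimate
w = [1 / v for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled effect
q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# I^2: share of total variability due to heterogeneity rather than chance
i2 = max(0.0, (q - df) / q) * 100

print(f"Q = {q:.2f}, tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
```

A large I² signals exactly the situation described above: much of the variation in effect sizes reflects genuine differences between interventions and contexts, which moderator analyses then try to explain.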
Another problematic issue, highlighted by our analyses and by subsequent re-analyses of our data (Maier et al., 2022; Szaszi et al., 2022; Bakdash & Marusich, 2022), was the identification of publication bias in the choice architecture literature. The distribution of effect sizes showed an overrepresentation of successful relative to unsuccessful implementations of choice architecture interventions, indicating that the average intervention effect computed from our data likely overestimates the actual effectiveness of interventions. Though not unique to the field of choice architecture, this publication bias is highly problematic for a field that has the potential to positively impact people’s lives through its implementation in public policy making. It is therefore important to address this bias through more rigorous research practices, such as preregistration and registered reports, as well as a more systematic publication of null findings. In this context, we would argue that a stronger focus on heterogeneity in intervention effects may help to provide a more fine-grained answer to the question of when and where choice architecture interventions work, by accounting for variations in effect sizes, including null effects and backfiring effects.
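One common way to detect the kind of asymmetry described above is Egger's regression test: regressing each study's standardized effect on its precision, where an intercept far from zero suggests funnel plot asymmetry consistent with publication bias. The sketch below uses made-up numbers (not our data) in which small, imprecise studies skew positive:

```python
# Hypothetical effects and standard errors; the small studies (large SE)
# report larger positive effects, mimicking an overrepresentation of
# successful implementations. Illustrative numbers only.
effects = [0.80, 0.65, 0.50, 0.35, 0.30, 0.25, 0.22, 0.20]
ses = [0.40, 0.35, 0.25, 0.15, 0.12, 0.10, 0.08, 0.06]

# Egger's test: ordinary least squares of standardized effect (y/se)
# on precision (1/se); a nonzero intercept indicates asymmetry.
x = [1 / s for s in ses]
y = [e / s for e, s in zip(effects, ses)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
intercept = my - slope * mx
print(f"Egger intercept = {intercept:.2f} (near 0 would suggest symmetry)")
```

In this constructed example the intercept is clearly positive, mirroring the pattern of small studies reporting disproportionately favourable effects; in practice one would also compute a standard error for the intercept to test its significance.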
Now, what can we conclude from this mix of insights and challenges revealed by our meta-analysis?
In our view, our meta-analysis provides a useful overview of the current state of choice architecture research that, despite highlighting important gaps in our understanding of when and why these interventions work, may serve as a guide for practitioners and policy makers. Our findings may also provide an adequate evidence base for evaluating the relative effectiveness of different choice architecture interventions; the absolute effect sizes identified by our analysis, however, should be interpreted with caution given the publication bias we identified.
When aiming to change a specific behaviour, it remains critically important to empirically test the effects of a choice architecture intervention in the given policy context using field trials (DellaVigna & Linos, 2022), for instance in the form of megastudies, which allow for the integrated testing of multiple choice architecture interventions developed by different teams of experts (Milkman et al., 2022).