Doing Behavioural Science More Collaboratively: Lessons from a Field Experiment

What happens when policy interventions are rolled out without engaging with those affected the most? Solutions that look good on paper but fail in practice. In this blog post, Henrico van Roekel argues that collaboration is not just an ethical ideal, but a practical necessity for creating interventions that work in the real world.


The Greek myth of Icarus is often interpreted as a warning against recklessness: his wax wings melt when he flies too close to the sun. But the deeper lesson may lie with Daedalus, his father. Daedalus designed the wings and gave instructions without considering Icarus’s perspective or involving him in the process. What if he had asked his son for input? Together, they might have devised a better strategy, like wearing warmer clothing to signal when he was flying too high, making the risk tangible and actionable.

This oversight mirrors a common flaw in behavioural science: experts often design interventions with little input from the people most affected: those whose behaviour is being targeted, those whose compliance is essential, and those who stand to gain or lose from the intervention’s outcomes. The result? Policies that fail in practice despite their seemingly flawless logic.

In a recent study, Nudges Can Be Both Autonomy-Preserving and Effective (with Laura Giurge, Carina Schott and Lars Tummers), we tackled email overload in an elderly care organization. Designing interventions without understanding employees’ experiences of this challenge risked not only ineffectiveness but also resistance. So, we made collaboration the cornerstone of our approach.

The problem: email overload

Email overload is a pervasive issue in many work environments. While email promises flexibility, it often becomes a non-stop source of stress. Employees experience fragmented attention during work and struggle to disconnect when emails keep coming in during leisure time. This was particularly true in the elderly care organization we studied, where staff were overwhelmed by constant communication demands.

Designing interventions to reduce email overload would clearly be useful, but such interventions could easily be ineffective or even backfire if they did not take into account employees’ perspectives on the problem. We took care to minimize that risk.

Step 1: Co-designing interventions

We began by working closely with an expert team within the organization, consisting of managers and HR employees. This team was key in focusing the study and analyzing the behavioural problem at hand. We conducted 11 interviews to further develop the interventions. Taking on board diverse perspectives allowed us to design interventions that best targeted the problem.

We developed three types of interventions:

1. Opinion Leader Nudge: We noted that the environment was quite hierarchical and that people valued the opinions of their superiors. This intervention informed employees that the HR manager would email less, on the assumption that they would follow suit.

2. Rule-of-Thumb: We observed that much of the work in the organization was protocolized, but these protocols were often lengthy and complicated. A rule-of-thumb, by contrast, is a highly simplified protocol: an easy-to-follow guideline that works in most situations. This intervention stated that a single question determines whether email is the appropriate communication tool: “How long can you wait for the reply?” It then suggested different tools depending on the desired waiting time.

3. Self-Nudges: We recognized that employees preferred to influence their own behaviour in addition to having management do so. We therefore developed self-nudges that employees could use for specific challenges. For example, we suggested that employees who felt unsure whether a reply was expected could reduce that uncertainty for their colleagues by always indicating in their own emails whether a response was needed.

After months of developing these interventions, we piloted them in an online panel to gauge their effectiveness and perceived autonomy. The results were promising, so we moved forward to test them further.

Step 2: The large-scale survey experiment

Next, we tested the interventions in a sample of healthcare employees. To provide a comprehensive comparison, we included traditional solutions to email overload: restricting email access to two hours a day, offering monetary rewards for emailing less, and providing public praise for reducing email use.

Our interventions were consistently rated as more autonomy-preserving and effective than these traditional approaches, and elicited higher compliance estimates (see Figure 1). This highlighted the importance of designing interventions that align with employees’ preferences.

Figure 1: Co-designed nudges were perceived to be more effective than traditional interventions on email overload

Bar graph comparing compliance scores in percentage for various interventions including public praise, email access limit, monetary reward, rule-of-thumb, self-nudges, opinion leader nudge, and combinations of nudges. The graph displays self-admission rates and corrected estimates for each intervention.

Source: van Roekel et al. (2026).

Step 3: The quasi-field experiment

Finally, we implemented the interventions in the organization over eight weeks. We measured email use throughout this period. While we observed a general decrease in email use, the specific results were mixed, a common challenge in field experiments. Nevertheless, the process itself provided valuable insights into designing interventions that resonate in practice.

After finishing the experiment, we took our time discussing the results with both management and employees, presenting the findings and reflecting together on their experience of taking part in an experiment.

Key lessons for behavioural science

Our study offers several key lessons for designing effective behavioural interventions. Here we mention a few:

1. Listen to Practitioners: Scholars often focus on outcomes they deem theoretically relevant or impactful, but practitioners’ struggles are often more immediate and practical. In our case, employees identified excessive emailing, after-hours phone use, and overtime as their top concerns. When selecting behaviors to target, practitioners’ voices are essential.

2. Collaborate on Analysis: Scholars bring systematic methods and theoretical frameworks, but practitioners provide the contextual understanding that makes these methods meaningful. This collaboration involves not only interviewing and observing practitioners but also reflecting together on the conclusions of the analysis, ensuring insights are relevant and actionable at all organizational levels.

3. Adapt Experimental Designs: Compromises, like quasi-experiments, can still provide actionable insights while maintaining scientific integrity. In our case, randomization was not deemed acceptable or feasible in the organization, so we opted for a quasi-experiment. Even if not perfect, field studies are highly useful in assessing real-world effects and, perhaps as importantly, function as a form of action research whereby researchers and practitioners tackle challenges together, both in identifying problems and potential solutions, and in gathering data.

4. Extend Impact Beyond the Study: Organizations can use their experience in the study to position themselves as thought leaders within their industry. Sharing findings externally (through professional publications, conference presentations, or media) amplifies the impact of the project and fosters broader engagement beyond academia. After we finished our study, for example, the HR director’s perspective on the project was featured in a national newspaper, allowing us to bring the behavioural problem of email overload to the attention of a large audience.

The story of Icarus teaches us that even the best-designed solutions can fail without the right perspective. In behavioural science, this means involving stakeholders not just as subjects, but as partners in the process. Our study demonstrates that when interventions are co-created with those affected, they are more likely to be accepted. While direct effectiveness may vary, this can lead to sustained attention to the behavioural problem at hand. The future of behavioural science lies in collaboration.

You can read the full article here.

Henrico van Roekel is an Assistant Professor at the Utrecht University School of Governance (USG), specializing in applying behavioural science in the public sector, leadership, and well-being at work.