Time to move beyond ‘good rules for bad people’

Image: A stamp printed in Greece circa 1983 shows Odysseus and Sirens by Lefteris Papaulakis / Shutterstock.com

By Martin Lodge – London School of Economics and Political Science

The contribution by Hirshleifer and Teoh directs attention to an often neglected side of the study and practice of behavioural insights: the behavioural biases of decision-makers themselves, whether political or bureaucratic. Not unlike Thaler and colleagues in one of the earlier pieces that set the agenda for the ‘Nudge’ movement (Jolls et al., 1998), Hirshleifer and Teoh describe a number of mechanisms and phenomena that have been widely associated with distortions in policy responses. It is important not only to move beyond a sole interest, as Hirshleifer and Teoh put it, in ‘good rules for bad people’, but also to consider the circumstances in which ‘bad rules’ might emerge. Hirshleifer and Teoh offer such an account in the area of financial regulation and accounting.

It is indeed very puzzling why the behavioural policy movement, which purports to be interested in the consequences of bounded rationality, has invested so much in studying individual transactional choices by consumers but has cared so little about the behavioural biases of organisations themselves (Lodge and Wegrich, 2016). The fashionable turn towards experiments and randomised controlled trials has only furthered this bias in research attention. This one-sided focus is all the more puzzling given that several lines of research speak directly to the kinds of concerns that Hirshleifer and Teoh highlight, and that are at the heart of the long-standing tradition of research on bounded rationality and decision-making.

One such literature is concerned with organisational decision-making under conditions of ambiguity. Herbert Simon saw organisations not as flawed, but as coping with competing pressures (Simon, 1997). Merton (1936) illustrated the basic mechanisms that lead to unintended consequences. In the same vein, the ‘garbage can’ model of decision-making (Cohen et al., 1972) painted a world in which solutions were hunting for problems, participation in decision-making meetings was fluid, and ‘technologies’ (both cause-effect assumptions and the assumed effects of interventions) were uncertain. Organisations are biased in their information-processing capacity, as individuals seek confirmation of their favoured solutions and explain away ‘outliers’; a problem that exists both in the context of a very small number of incidents (March et al., 1991) and in that of ‘big data’, where the complexity of the signals drowns out specific information (Griffiths et al., 2016).

Another literature is interested in ‘responsive publics’. Hirshleifer and Teoh paint the familiar world in which ‘moral panics’ (Cohen, 1972) create the conditions for ‘Pavlovian politics’ (Hood and Lodge, 2005): high-salience issues place politicians under the pressure of media and other attention, prompting policy responses. The risk perception literature, too, has highlighted that certain risks evoke more dread than others. But it is not just ‘uninformed publics’ that distort policy responses; the world of financial regulation, in particular, might be said to have witnessed ‘strangulation’ by powerful interests, both private and public (Carpenter, 2010). ‘Bad rules’ are not just about ‘bad publics’ and ‘bad attentive politicians’; they are about the resources allocated to interests in different political systems. Indeed, ‘bad rules’ might be due to shared ideological and academic outlooks that affect regulators, industry participants and academics alike, an effect termed ‘conceptual capture’ by Carpenter and Moss (2013).

Rather than condemning the ways in which organisations make decisions and respond to ‘irrational’ publics (as Hirshleifer and Teoh do, in line with the wider literature), this line of inquiry should give rise to an interest in the ‘acceptance factor’: why certain proposals become more popular than others, given that we know that most policy responses are reactive to existing approaches. We should accept that when it comes to policy (and regulation), the world is full of pendulum swings, or even ‘hunting around’, rather than stability.

The political world that Hirshleifer and Teoh paint is a familiar one to those in the public choice/political economy tradition. They advocate extra-large majorities and other constitutional devices to muzzle the responsive instincts of political and regulatory systems. There are at least three responses to this. One is that such ‘choice architectures’ have their own biases, and these might not be very efficient either, especially as environmental conditions change. Fritz Scharpf (1988), in his work on German federalism, has shown how extra-large majority decision rules lead to ‘joint-decision traps’ as potential losers hold out for side-payments. The second is that constitutional rules have been shown to be hardly resistant when faced with determined political will: given concerns about banking sectors crashing, it is unlikely that special decision rules will be seen as particularly legitimate if they can be associated with the collapse of banks or the closure of ATMs. The tempting song of the sirens, in other words, is too powerful for a modern-day Odysseus to resist, even when tightly tied to the mast.

Third, devices to bring ‘rationality’ into decision-making processes are themselves exposed to organisational biases: programme budgeting collapsed within years; cost-benefit analysis and regulatory impact assessments are usually little more than fig leaves for politicians’ manifesto commitments, having attracted a booming industry of consultants without necessarily enhancing regulatory quality; and sunset provisions have generated opportunities for more capture by concentrated special interests rather than less (as public interest fades away over time). More than two decades ago, Stephen Breyer (1995) similarly bemoaned irrationality in risk regulation and advocated a ‘mega-regulator’ to enhance consistency in approaches. In some cases, such a challenge function in the world of regulation might be useful, but such institutional sticking-plaster avoids much wider questions about the methodological and theoretical assumptions underlying models and research, and it seems to deny the inherent value conflicts that exist over questions of risk and human life.

Finally, Hirshleifer and Teoh paint a world of heuristics and ‘system 1’ thinking in which human decision-making is inherently flawed, and in which such flaws require an Odysseus-like ‘tying to the mast’ in order to allow ‘optimal’ levels of regulation to persist. Politicians, regulators and ‘publics’ require help to save them from ‘bad rules’. This is an important perspective. We all want the least effort for maximum impact. But is there really that much consensus about what an ‘optimal’ rule is? Every intervention has its own biases and both apparent and unintended side-effects. Furthermore, there is also a more constructive agenda, namely one that thinks more positively about ‘system 1’ thinking and the reliance on heuristics (for example, Gigerenzer, 2014).

The challenge for the behavioural policy movement is therefore twofold. One is to reconnect with its own intellectual legacy of interest in decision-making in organisations; the other is to be transparent about theoretical and methodological differences in understanding human decision-making. Such a change of tack would inevitably lead to more diverse policy advocacy, which might enhance wider debate and understanding about politics and policy.

Read the full article by Hirshleifer and Teoh, ‘How psychological bias shapes accounting and financial regulation’, for free in the first issue of Behavioural Public Policy.


Breyer, S. (1995) Breaking the Vicious Circle, Cambridge, MA: Harvard University Press.

Carpenter, D. (2010) ‘Institutional strangulation: bureaucratic politics and financial regulation in the Obama administration’, Perspectives on Politics 8(3): 825-46.

Carpenter, D. and Moss, D. (2013) ‘Introduction’, in D. Carpenter and D. Moss (eds) Preventing Regulatory Capture, Cambridge: Cambridge University Press.

Cohen, S. (1972) Folk Devils and Moral Panics, London: MacGibbon and Kee.

Cohen, M.D., March, J. and Olsen, J. (1972) ‘A garbage can model of organizational choice’, Administrative Science Quarterly 17(1): 1-25.

Gigerenzer, G. (2014) Risk Savvy, London: Penguin.

Hood, C. and Lodge, M. (2005) ‘Pavlovian innovation, pet solutions and economizing on rationality?’, in J. Black, M. Lodge and M. Thatcher (eds) Regulatory Innovation, Cheltenham: Edward Elgar.

Jolls, C., Sunstein, C. and Thaler, R. (1998) ‘A behavioral approach to law and economics’, Stanford Law Review 50(5): 1471-1550.

Lodge, M. and Wegrich, K. (2016) ‘The rationality paradox of Nudge’, Law & Policy 38(3): 250-67.

March, J., Sproull, L. and Tamuz, M. (1991) ‘Learning from samples of one or fewer’, Organization Science 2(1): 1-13.

Merton, R. (1936) ‘The unanticipated consequences of purposive social action’, American Sociological Review 1(6): 894-904.

Scharpf, F. (1988) ‘The joint-decision trap’, Public Administration 66(3): 239-78.

Simon, H. (1997 [1947]) Administrative Behavior, 4th edn, New York: Free Press.
