Algorithms and Autonomy: Regulating Recommender Systems in the Age of Hyper-Nudging

A cartoon depicting an older woman shopping in a grocery store, looking at a shelf displaying items with labels 'CUSTOMERS WHO BOUGHT THIS ITEM:' followed by a list of products including bananas, carrots, and a loaf of bread.

Image source: Prateek Kapur

Online algorithms are increasingly integral to modern life. They assist us in our decisions and help us interact with each other. A particularly pervasive manifestation of algorithmic assistance is found in recommender systems – tools designed to personalize and prioritize digital content by suggesting items (e.g. products, ads, movies, songs, posts, news) for online users to engage with. Recommendations are based on our past behavior and our similarities to other users. In this blog, Malte Dold argues that while recommender systems can help users by reducing complexity and search costs, they also target and exploit behavioral biases and pose potentially novel threats to our autonomy. Recommender systems are a domain of our attention economy that lends itself to regulation through budges.

What are Budges?

Budges represent a regulatory approach that targets producers rather than consumers (Oliver, 2013; 2015). They are specifically designed to regulate producers who undermine fair exchange by exploiting market imperfections (e.g., information asymmetries), using behaviorally informed tools (e.g., product placement, decoy effects), and imposing undue harm on others (e.g., overconsumption). The harm imposed on others must exceed a certain threshold of unacceptability, such as severe infringements on autonomy. Moreover, any budge regulatory intervention must also consider system-level effects, including the costs of regulation, impacts on allocative efficiency, and potential distributive concerns.

Understanding Recommender Systems

Recommender systems are a pervasive part of our daily user experience online. They can be found in e-commerce (Amazon, Google, etc.), streaming services (Spotify, Netflix, etc.), social media (Facebook, Instagram, TikTok, etc.), and on news sites (Apple News+, Google News, etc.). In all these domains, they personalize and filter content dynamically based on real-time user data to maximize user engagement.

Recommender systems broadly fit into two categories:

  • Content-based filtering: Suggests items based on their similarity to the user’s past choices (you read a certain news article, so the system recommends similar articles).
  • Collaborative filtering: Suggests items based on the behavior of similar users (you and another user have read the same news articles, so the system recommends each of you articles the other has engaged with).

Figure 1: How recommender systems filter for consumers

A diagram illustrating two types of filtering in recommender systems: Content-Based Filtering and Collaborative Filtering. The content-based side shows a user reading articles and receiving recommendations based on similar content. The collaborative side depicts two users with overlapping reading histories, with the system recommending to each user content the other has read.

Source: Dold, 2025
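The two filtering approaches above can be sketched in a few lines of Python. This is a toy illustration with made-up articles and topic tags, not any platform’s actual algorithm: similarity here is simple set overlap (Jaccard), whereas real systems rely on learned embeddings and engagement signals.

```python
# Toy illustration of content-based vs. collaborative filtering.
# All data and function names are hypothetical.

ITEM_TOPICS = {                      # news articles and their topic tags
    "a1": {"politics", "economy"},
    "a2": {"politics", "europe"},
    "a3": {"sports", "soccer"},
    "a4": {"economy", "markets"},
}

HISTORY = {                          # which articles each user has read
    "alice": {"a1"},
    "bob":   {"a1", "a4"},
}

def jaccard(s, t):
    """Overlap of two sets, scaled to 0..1."""
    return len(s & t) / len(s | t) if s | t else 0.0

def content_based(user, k=1):
    """Recommend unseen items whose topics resemble the user's past reads."""
    seen = HISTORY[user]
    profile = set().union(*(ITEM_TOPICS[i] for i in seen))
    candidates = [i for i in ITEM_TOPICS if i not in seen]
    return sorted(candidates, key=lambda i: -jaccard(ITEM_TOPICS[i], profile))[:k]

def collaborative(user, k=1):
    """Recommend unseen items read by users with overlapping histories."""
    seen = HISTORY[user]
    scores = {}
    for other, their in HISTORY.items():
        if other == user:
            continue
        weight = jaccard(seen, their)        # user-user similarity
        for item in their - seen:
            scores[item] = scores.get(item, 0.0) + weight
    return sorted(scores, key=lambda i: -scores[i])[:k]
```

For the user "alice", content-based filtering surfaces articles sharing her past topics, while collaborative filtering surfaces what the similar user "bob" has read.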

So far, so good. If this were the end of the story, there would be little reason to call for regulation. One could argue that these systems simply filter and rank items and help reduce information overload.

So, wherein lies the problem? Jeff Hammerbacher, Facebook’s first research scientist, alludes to it: “The best minds of my generation are thinking about how to make people click ads . . . and it sucks”; and former Netflix CEO Reed Hastings makes a similar point: “At Netflix, we are competing for our customers’ time, so our competitors include Snapchat, YouTube, sleep, etc.” (both quotes are from Williams 2018). In other words, the aim of recommender systems on Facebook or Netflix is not necessarily to satisfy users’ preferences, but rather to maximize engagement by capturing their attention, time, and wallets.

To maximize engagement, recommender systems employ what Yeung (2017) calls hyper-nudges: highly personalized and dynamic interventions designed to influence behavior by subtly shaping digital choice environments. Unlike earlier static and generalized nudges (such as placing healthy food at eye level in a cafeteria), hyper-nudges operate continuously, adaptively, and pervasively. Drawing on vast datasets, they construct personalized choice architectures based on users’ habits and vulnerabilities.

At their core, hyper-nudges identify and exploit cognitive biases at the level of the individual user, for instance, by manipulating item order or focusing on emotionally charged content, reinforcing impulsive engagement through defaults and cliffhangers. In doing so, recommender systems do not necessarily cater to pre-existing preferences but produce them; they alter the mental “plausible path” through which information is processed and choices are reached (Johnson 2021).
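The adaptive feedback loop behind a hyper-nudge can be caricatured in a few lines. This is a hypothetical sketch, not any platform’s code: every click reinforces a per-user weight on content features (here, emotional charge), and the updated weights immediately re-rank the next feed, so the choice architecture continuously reshapes itself around the individual.

```python
# Hypothetical sketch of a hyper-nudge feedback loop: engagement updates
# a per-user model, which in turn reshapes what the user sees next.

def rerank(items, user_weights):
    """Order items by predicted engagement for this specific user.
    Each item carries per-feature scores (e.g. 'emotional', 'novel')."""
    def score(item):
        return sum(user_weights.get(f, 0.0) * v
                   for f, v in item["features"].items())
    return sorted(items, key=score, reverse=True)

def register_click(item, user_weights, lr=0.1):
    """Adaptive step: reinforce whatever features the clicked item had."""
    for f, v in item["features"].items():
        user_weights[f] = user_weights.get(f, 0.0) + lr * v
```

A single click on an emotionally charged item is enough to push similar items to the top of the next ranking, illustrating how preferences are produced rather than merely served.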

Autonomy Under Threat

An emerging literature has raised autonomy concerns in the context of recommender systems (Milano et al. 2020). Debates on autonomy cover a wide field. Here, I focus on Joseph Raz’s influential notion of autonomy as self-authorship, which he explicitly formulated for the modern world of dynamic technological change. In his book, The Morality of Freedom, Raz identifies three crucial conditions of autonomy. It is important to note that all three conditions come in degrees.

Figure 2: Conditions of Autonomy

Infographic illustrating the three conditions of autonomy: (1) Appropriate Mental Abilities, depicted by a brain with gears; (2) Adequate Range of Options, represented by various icons including sunlight, a briefcase, people, and a soccer ball; (3) Independence from Manipulation, shown as a person controlling puppets.

Source: Dold, 2025

1. Appropriate Mental Abilities: Raz emphasizes the crucial role of capacities for correct means-end reasoning. Recommender systems, particularly in entertainment and e-commerce, often disrupt this capacity by fostering impulsive or passive consumption: instead of acting on their original preferences, users simply follow the recommendations. The result is less proactive, goal-directed choice; tellingly, some estimate that around 80% of viewing activity on Netflix or YouTube is driven by their recommender systems. This share might be even higher for TikTok or Instagram Reels.

2. Adequate Range of Options: Autonomy, for Raz, requires meaningful variety in choices. However, recommender systems frequently result in algorithmic overfitting (“over-personalization”), restricting users to previously expressed interests and reducing exposure to novel options. The effect is amplified through “nudge stacking”, where recommendations follow users across platforms: a search for running shoes on Google leads to running gear on Amazon, and then to running videos on Instagram. Such constant, cross-platform recommendations can erode the variety of options necessary for recognizing the opportunity costs of one’s choices.

3. Independence: Finally, autonomy requires freedom from manipulation when forming one’s preferences and beliefs. Recommender systems used on news and social media significantly threaten this condition of autonomy, as personalized algorithms shape deeper-held beliefs, preferences, and even identity. Their subtle yet powerful influence may reshape individuals’ perceptions of themselves and their society. When recommender systems rely on feedback from the majority of similar users, they can produce winner-takes-all dynamics, where a few options dominate collective attention. This creates digital echo chambers, narrowing exposure to diverse viewpoints and reducing the breadth of opinions we encounter.

Regulatory Options: Meta-Choice Remedy

Given the autonomy threats recommender systems pose, they meet Oliver’s necessary conditions for budge interventions: they undermine fair exchange by leveraging information asymmetries and behaviorally informed tools such as framing effects, and, measured against Raz’s conditions, they may impose undue harm on users’ autonomy.

How might budges regulate recommender systems? Several regulatory pathways are possible, each with its own drawbacks (Wagner and Eidenmüller 2019):

  • Self-help remedies (e.g., ad blockers to reduce exposure to recommendations) often produce inefficient outcomes and disproportionately benefit technically skilled users.
  • Transparency requirements (e.g., mandatory reading of information about algorithmic processes before using a platform) tend to be ineffective due to cognitive overload and user disengagement from technical details, similar to the widespread disregard for cookie consent notices.
  • Substantive algorithmic standards (e.g., rules mandating a degree of serendipity in recommendations) face significant regulatory complexity and enforcement challenges, making them impractical in many contexts.

Given these limitations, effective user empowerment might require targeting early stages of the decision-making process. This means giving users a real choice between a personalized experience based on past behavior and similar users, and a non-personalized experience with random recommendations that encourage exploration. Some major online platforms already offer this option. Google, for example, lets users opt out of personalized advertising. However, the default is set to personalized ads, and the opt-out process is made unnecessarily sludgy.

Meta-choice budges would require platforms to provide simple, accessible options for opting out of personalized recommendations or selecting a blend of personalized and non-personalized content. Periodic prompts or “sunset clauses” would encourage users to reflect on and actively reassess their engagement with personalization. These mechanisms help counteract inertia and habitual use by creating decision points where users can reconsider whether personalization still serves their goals.
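A minimal sketch of what such a meta-choice layer might look like, with hypothetical function and parameter names: the user’s stored mode decides whether the feed is the platform’s engagement-ranked list, a random draw from the catalog, or a blend of both, and a sunset clause marks the choice as stale after a period so the platform must re-prompt the user.

```python
import random
from datetime import date

def recommend(personalized, catalog, mode="mixed", mix=0.5, rng=None):
    """Build a feed that honors the user's meta-choice.

    personalized: the platform's engagement-ranked items
    catalog:      full item pool, used for random exploration
    mode:         'personalized' | 'random' | 'mixed'
    mix:          share of personalized items when mode is 'mixed'
    """
    rng = rng or random.Random()
    n = len(personalized)
    if mode == "personalized":
        return list(personalized)
    if mode == "random":
        return rng.sample(catalog, n)           # pure exploration
    n_pers = round(n * mix)
    feed = personalized[:n_pers] + rng.sample(catalog, n - n_pers)
    rng.shuffle(feed)                           # don't privilege either slice
    return feed

def choice_expired(chosen_on, today, sunset_days=90):
    """Sunset clause: after sunset_days the stored mode must be reaffirmed."""
    return (today - chosen_on).days >= sunset_days
```

The design point is that the default and the decision points sit with the user, not with the platform’s engagement objective.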

The way forward

Society and business influencing individuals’ preference formation is nothing new. While recommender systems inevitably reflect such influence, it is the unprecedented precision and scale of today’s algorithmic recommendations that should heighten concerns about autonomy. Given the use of hyper-nudges on these platforms, regulatory budges in the form of meta-choice options are worth considering: they would give consumers the choice between personalized, random, or mixed recommendations.

The hope is that meta-choice remedies would improve transparency and raise awareness of how personalization shapes online behavior. Meta-choice interventions would be minimal and relatively inexpensive to implement, both for regulators and online platforms. They could improve allocative efficiency by promoting genuine market competition based on consumer preferences rather than on algorithmic exploitation of users’ attention.

Providing meta-choice options aims to preserve the benefits recommender systems offer while safeguarding individual autonomy. As algorithm-driven personalization continues to expand in an oligopolistic online landscape dominated by a few major tech companies, ensuring consumer autonomy through thoughtful regulation remains a crucial task for practitioners in behavioral public policy.

Malte Dold is Associate Professor in the Economics Department at Pomona College in California. His research lies at the intersection of behavioral economics, philosophy of economics, and the history of economic thought. He works on a wide range of topics, most of which connect to the question of how situational framing and social environments shape decision-making processes and what constitutes individual agency when preferences and beliefs are context-dependent and change over time.

This post is based on a recent talk Malte gave at the PPE Society London’s first annual meeting, held at King’s College London from July 16-18, 2025. Click here to access the slides of his talk.