Misinformation thrives in an environment where platform providers shape the rules yet evade responsibility. As companies like Meta and X dismantle fact-checking and relax content moderation, users are left to navigate a fragmented and often deceptive digital landscape. But behavioural science offers solutions. In this blog, Christoph M. Abels walks us through the strategies that help individuals critically assess information, resist manipulation, and adapt to a world where platforms no longer seriously attempt to verify content for accuracy.
Those who shape the flow of information are retreating from accountability. Meta and X, once committed to systematic content moderation, are now largely walking away from it. In its place, they offer a crowdsourced patchwork such as Community Notes, shifting the burden of accuracy onto users. But can the wisdom of the crowd outmatch the scale and sophistication of misinformation? Research suggests otherwise. If platforms refuse to act, users must be equipped to resist manipulation themselves. The challenge is clear: without intervention, misinformation remains unchecked. The solution? Strategies that empower individuals to think critically and discern truth from falsehood.
Unravelling the safeguards
In early January – just 13 days before the inauguration of the 47th U.S. president, Donald Trump – Meta CEO Mark Zuckerberg announced a significant shift in the company’s approach to misinformation. Meta’s platforms (Facebook, Instagram, and Threads) in the U.S. will no longer rely on third-party fact-checking programs. Instead, they will adopt a Twitter / X-style “Community Notes” system, a decentralized mechanism that allows users to append context to potentially misleading content. The announcement briefly revisits the origins of Meta’s fact-checking initiative, which was introduced in 2016 in the wake of Trump’s first election victory. Now, however, the program has been discontinued, with Meta citing concerns about fact-checkers’ inaccuracy and claiming the program had “too often become a tool to censor”.
According to a video statement by Zuckerberg, this shift was driven by pressure from “government and legacy media,” which, he claimed, forced the company to engage in increasing censorship. He also explicitly referenced the election and a “cultural tipping point” toward prioritizing free speech once again.
While Zuckerberg suggests that public sentiment favours free speech over stricter content moderation, research paints a more nuanced picture. One study found that U.S. respondents preferred removing harmful misinformation over protecting free speech, with Republicans notably less supportive of such removals than Democrats and Independents. Given Republican opposition to content moderation policies and the incoming administration’s particularly unfavourable stance, Meta appears to be aligning itself with the new political landscape. As a result, Community Notes will play a central role in Meta’s content moderation strategy.
What is Community Notes and can it work?
The programme is a crowdsourced fact-checking system that allows verified users to append context to potentially misleading posts, relying on a bridging-based algorithm to prioritize notes deemed helpful across ideological divides. Originally called Birdwatch, the programme was launched in 2021 and was never intended as a replacement for content moderation, but as a complementary measure.
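To make the mechanism more concrete, here is a deliberately simplified sketch of the bridging idea: a note is surfaced only when raters from viewpoint clusters that usually disagree independently rate it as helpful. The cluster labels, thresholds, and function below are illustrative assumptions for this post, not the platform’s actual implementation.

```python
# Illustrative sketch of bridging-based note ranking (not X's real algorithm).
# A note is displayed only if raters from different viewpoint clusters
# independently find it helpful on balance.

from collections import defaultdict

def should_display(ratings, min_ratings_per_cluster=2, min_helpful_share=0.6):
    """ratings: list of (rater_cluster, rated_helpful) pairs for one note."""
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)

    if len(by_cluster) < 2:                       # require cross-group agreement
        return False
    for votes in by_cluster.values():
        if len(votes) < min_ratings_per_cluster:  # too few ratings from a group
            return False
        if sum(votes) / len(votes) < min_helpful_share:
            return False                          # this group, on balance, disagrees
    return True

# Hypothetical ratings from two clusters of raters who typically disagree
ratings = [
    ("cluster_a", True), ("cluster_a", True), ("cluster_a", False),
    ("cluster_b", True), ("cluster_b", True), ("cluster_b", False),
]
print(should_display(ratings))  # True: both clusters lean "helpful"
```

The flip side of requiring agreement across groups is that notes on sharply polarizing posts may never clear the threshold – a limitation that becomes apparent below.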
Community Notes became the centrepiece of Twitter’s “evolving approach” to misinformation when Elon Musk took over the platform in 2022 (aptly captured by The Verge’s headline, “Welcome to Hell, Elon”). This shift coincided with significant staffing cuts to the team responsible for handling misinformation, just days before the U.S. midterm elections. Musk has since declared Community Notes “the best source of truth on the internet”, and its remit to cover all accounts has been portrayed as a democratisation of the platform.
But research by the Center for Countering Digital Hate (CCDH), a nonprofit research organization, highlights critical flaws in the system. Community Notes are only displayed to the public if contributors reach a consensus – an obstacle when addressing polarizing and contested issues. As a result, the system is largely incapacitated. Ahead of the 2024 U.S. elections, the CCDH found that for 74% of sampled posts with misleading content, accurate Community Notes were available but never shown to the public. Meanwhile, posts promoting false claims without Community Notes were roughly 13 times more widespread than those that had been fact-checked.
The CCDH further notes that after years of efforts to establish imperfect but functional content moderation systems, platforms can now disregard these norms with few consequences, despite the ensuing chaos.
What makes misinformation so problematic?
Why does this matter? Tackling misinformation is not just an intellectual pursuit – it has real-world consequences. Research shows that exposure to misinformation can directly undermine both individual and public health. For example, false claims about COVID-19 vaccines have been found to reduce vaccination intent by 6.2 percentage points. Beyond that, frequent viewers of Fox News – where the effectiveness of COVID-19 vaccines has been repeatedly questioned – were less likely to get vaccinated.
Misinformation does not just distort reality – it weakens democracy. In a recent article in the Behavioural Public Policy journal, colleagues and I argued that democratic backsliding is fuelled by political elites who violate norms. But such violations rarely occur in a vacuum. They require a passive or complicit public, one that misinformation can help manufacture – for example, by falsely claiming an election was rigged in order to justify disrupting a peaceful transfer of power, including support for violent political activism, or by undermining trust in institutions critical to public safety.
The danger, as we outlined, is that no one knows which broken norm will be the tipping point. Each violation increases the likelihood of democracy sliding into autocracy. In this sense, misinformation is not just noise – it is a serious risk factor for democratic erosion.
Empowering users through behavioural science-based interventions
As misinformation proliferates, the work of the behavioural science community has rarely been more critical. Researchers have underscored the structural vulnerabilities of the information ecosystem – an environment whose architecture is largely shaped by private entities such as Meta and X. When these companies prove reluctant to implement measures to curb the spread of harmful content, the available options narrow, and the task increasingly falls to platform users.
There are still ways to challenge misinformation. Chief among them is equipping users with the tools to better navigate the digital landscape and strengthen their resilience to manipulation. The behavioural sciences have developed a range of interventions suited to this task. A recently published overview of individual-level interventions against misinformation categorizes them into three broad approaches: nudges, refutation strategies, and boosts or educational interventions.
Nudges seek to shape choice architectures in ways that subtly steer decision-making toward individually or socially beneficial outcomes. In the online sphere, these interventions often aim to increase attentiveness to accuracy (prompting users to consider whether a headline is accurate), introduce friction to slow the spread of misinformation (requiring users to read an article before sharing it), or leverage social norms (highlighting disapproval when false information is shared). A study across 16 countries found that individuals who rely more strongly on intuition tend to be worse at distinguishing truth from falsehood, which suggests that nudges encouraging more analytical thinking could be particularly effective. However, such interventions ultimately depend on platform providers’ willingness to implement them – a commitment that, given recent developments, cannot be assumed.
Refutation strategies focus on countering misinformation directly. This includes debunking false claims after they have circulated, or pre-emptively warning individuals about misleading content before they encounter it – so-called prebunking (comprehensively outlined in The Debunking Handbook). The goal is to correct false beliefs or, ideally, prevent their formation altogether. On social media, refutation often takes the form of fact-checking labels or credibility indicators, whereas prebunking seeks to bolster users’ resistance to misinformation before exposure. The latter is arguably preferable, as it heads off false beliefs at the outset and does not require platform buy-in.
Most promising, however, are boosting approaches. Rather than relying on external gatekeepers, boosting aims to enhance individuals’ cognitive and media literacy skills, equipping them with the competencies needed to navigate the information landscape. Inoculation, for instance, trains users to recognize manipulative tactics embedded in misleading information – thus straddling the line between refutation and boosting. Digital interventions, such as short videos, pop-up messages, or online games, offer scalable ways to reinforce these skills (an overview can be found here). While large-scale implementation remains a challenge, empowering users with transferable competencies presents the most sustainable strategy for addressing the evolving nature of misinformation.
Where next?
The internet offers no shortage of two things: drama and misinformation. And while the former has fuelled much of the latter, the companies that provide the infrastructure for our digital lives and daily communication seem unbothered by – if not complicit in – amplifying both. As technology companies continue to dismantle their efforts to curb misinformation, users are left to fend for themselves. Yet they are not entirely without resources. Over the years, the behavioural sciences have developed a range of interventions aimed at strengthening individuals’ ability to navigate the digital information landscape – reducing their susceptibility to misleading or outright false content. Ultimately, if platforms refuse to take responsibility, the best defence may be an informed and resilient public.
Christoph M. Abels is a Post-Doctoral Fellow in the Department of Psychology at the University of Potsdam. He is part of the ERC project PRODEMINFO (Protecting the Democratic Information Space in Europe), where he focuses on differing conceptions of truth and their implications for democracy. He holds a PhD in Governance from the Hertie School in Berlin.
