Meta’s Community Notes System Faces Scrutiny Over Misinformation Risks
Meta Platforms is under fire from its own Oversight Board for its plan to replace third-party fact-checking with the Community Notes system in countries outside the United States. The board’s recent assessment raises significant concerns about the potential for unchecked misinformation to spread, especially in regions vulnerable to election interference, conflict escalation, and human rights violations.

Shift Away From Traditional Fact-Checking

For over a decade, Meta relied on external fact-checkers to combat false information across its platforms – Facebook, Instagram, and WhatsApp. In early 2025, however, the company began shifting to Community Notes, a user-generated system intended to crowdsource accuracy. Critics suggest the move was partially motivated by political pressure, specifically a desire to align with the Trump administration’s preferences.

The Oversight Board’s Findings

The Oversight Board was asked to evaluate Meta’s expansion plans for Community Notes and to determine whether certain countries should be excluded due to higher risks. Its conclusion is blunt: the current program is inadequate for effectively curbing harmful misinformation.

“Delays in note publication, the limited number of published notes, and its dependence on the broader information environment’s reliability raise serious doubts about the extent to which Community Notes can meaningfully address misinformation linked to harm.”

The board highlights that Community Notes suffers from slow publication times, low participation rates, and untested effectiveness. This is especially dangerous in countries with repressive regimes, where elections can be manipulated, disinformation networks operate unchecked, language complexities hinder automated detection, and ongoing conflicts create fertile ground for political violence.

Data Discrepancies

The scale of Community Notes pales in comparison to the previous fact-checking system. So far, only around 900 notes have been posted in the US, whereas Facebook applied 35 million labels to posts across the European Union under the earlier system. This disparity underscores the limitations of the new approach.

Legal Troubles Compound the Issue

The negative assessment comes alongside two recent legal defeats for Meta: one in New Mexico and another in California. Both lawsuits allege that Meta’s platforms are intentionally addictive and cause harm to children. This combination of regulatory, legal, and internal criticism paints a troubling picture of the company’s content moderation strategy.

Ultimately, Meta’s reliance on Community Notes risks undermining trust in its platforms and exacerbating real-world harms. The Oversight Board’s report serves as a stark warning that the company must reconsider its approach to misinformation management before further expanding the system globally.