As Europeans headed to the polls for the 2024 European Parliament elections, social media platforms faced increasing scrutiny over their efforts to combat disinformation. A new analysis by Spanish fact-checking organization Maldita reveals that major tech companies are failing to combat false and misleading content related to the elections.
The analysis draws on data from the “Elections24Check” project, a European initiative that compiled a database of posts identified as disinformation. This collaborative effort involved various fact-checking organizations, including the investigative journalism platform Correctiv and prominent news agencies like AFP and DPA. The project’s database served as the foundation for Maldita’s evaluation of how social media platforms responded to flagged content during the European election period.
The study examined over 1,300 posts from 26 EU member states and found that platforms took no action against 43% of content identified as disinformation by fact-checkers.
Platform | Action | No Action
---|---|---
Facebook | 88.83% | 11.17%
Instagram | 70.73% | 29.27%
TikTok | 40.00% | 60.00%
X | 29.33% | 70.67%
YouTube | 24.49% | 75.51%
**Total** | 56.85% | 43.15%
However, the response varied significantly across platforms.
YouTube emerged as the worst performer, failing to take any visible action on 75% of disinformation content. When the platform did respond, it often merely displayed generic information panels or labeled state media sources without addressing the false claims directly.
“Some of these videos have reached 500,000 views,” the report notes, highlighting the potential reach of unchecked misinformation.
X (formerly Twitter) also performed poorly, with no visible action taken on 70% of flagged posts. The platform’s Community Notes feature, which allows users to add context to misleading posts, appeared on only 15% of posts already debunked by fact-checkers.
Maldita’s analysis found that X hosted 18 of the 20 most viral pieces of disinformation that went unmoderated, with some posts garnering over 1.5 million views each.
TikTok showed a slightly better response rate, taking action on 40% of disinformation posts. However, the video-sharing app’s approach primarily involved removing content entirely, which some critics argue reduces transparency.
Meta-owned platforms Facebook and Instagram demonstrated the highest moderation rates, addressing 88% and 70% of flagged content, respectively. Facebook’s approach focused on adding context through fact-checking labels while preserving the original posts.
The report highlights particular concerns about the platforms’ handling of misinformation related to immigration and electoral integrity. Content targeting migrants received no action in 57% of cases, while posts undermining faith in the electoral process went unchallenged 56% of the time.
The study points out that “YouTube and TikTok had zero percent response rates for misinformation targeting migrants,” underscoring a significant blind spot in content moderation.
These findings come at a critical time, as the European Union’s Digital Services Act (DSA) now requires large online platforms to take “appropriate, proportionate and effective measures” against disinformation. The European Commission has already launched investigations into X, Meta, and TikTok’s parent company, ByteDance, over concerns about their risk management and content moderation practices.