Opinion: Fact-Checking Disappears on Meta, But the Real Influence Comes Later

Zuckerberg scraps fact-checking on Facebook and Instagram. Outrage ensues. Less moderation, more disinformation, less control over harmful content — or at least, that’s what it sounds like. But how bad is it really? When Musk dismantled moderation on X, yes, the platform turned into an open sewer. But users also flocked to alternatives like Mastodon and Bluesky (which are actually quite pleasant). As a result, X’s power is diminishing, and the same will undoubtedly happen to Meta. Social media is fragmenting, with its influence scattered across an increasing number of platforms. No disaster.

A real concern lies elsewhere. Not with social media, but with the systems shaping the future of information — systems that are also in the hands of big tech: generative AI and Large Language Models. While social media is splintering, AI is consolidating. This is because developing AI is expensive — too expensive for “open” initiatives like Hugging Face’s BLOOM or the Netherlands’ own GPT-NL: sympathetic projects that unintentionally reveal it’s impossible to develop AI that is sustainable, fair, and useful all at once.

And that’s a problem. Recent research shows that Large Language Models absorb and reproduce the ideological preferences of their creators. This happens subtly: in how questions are answered, which perspectives are amplified, and how information is framed.

While you can easily turn your back on social media platforms, doing so with AI is much harder. You can stop being a user, but AI systems are becoming ever more deeply entangled in our entire information ecosystem: in search engines, media production, email programs, coding tools — in short, on both the user and producer sides.

Although big tech is currently applying ethical “guardrails” to promote diversity (not always successfully), Zuckerberg’s shift toward “less censorship” seems to foreshadow a broader rightward, populist shift in big tech. When this new direction inevitably reaches AI systems, those values will seep into everything they generate, interpret, and (re)produce. In this way, generative AI can subtly influence the framing of news, societal debates, and political preferences.

So, the issue isn’t that fact-checking on Facebook is disappearing — that’s something we can easily avoid. The real reckoning comes later, when AI quietly embraces Silicon Valley’s newfound appreciation for “free speech” and shifts the boundaries of what we collectively consider true and relevant.
