The digital battleground is shifting, and big tech is stepping back just as misinformation is surging forward. In the latest episode of The Angle Podcast, Kavisha Pillay, social justice activist and founder of the Campaign on Digital Ethics, joined us to discuss the deepening crisis of disinformation, the failure of social media platforms to regulate harmful content, and what it means for the future of digital accountability.
From Facebook’s decision to step away from active moderation to the unchecked spread of fake news, conspiracy theories, and political propaganda, Pillay unpacks how big tech profits from the seeming chaos—and why it refuses to invest in meaningful solutions.
Facebook’s dangerous retreat from moderation
At the heart of the conversation is Mark Zuckerberg’s announcement earlier in 2025 that Facebook would scale back its moderation efforts, shifting the responsibility to a community-driven system of fact-checking.
“It’s a flawed approach,” Pillay argues. “Truth shouldn’t be a popularity contest. When we leave fact-checking to unverified users, we’re inviting misinformation to spread unchecked.”
Platforms like Facebook and X (formerly Twitter) have long struggled to manage the tension between free speech and responsible content regulation, but Pillay believes that this latest move is less about policy and more about profit.
“Moderation is expensive. Fact-checking is expensive. But rather than investing in public safety, these companies prioritize engagement-driven models that reward outrage and falsehoods.”
The algorithmic machine – why disinformation thrives
One of the major themes of our discussion is how platform algorithms fuel misinformation—not by accident, but by design. While we didn't all agree on this point, a platform that derives its profit from interaction and sharing, and from the growth of ever more interaction, bears at least some responsibility for what that interaction amplifies.
There are some basic truths about how these platforms work. Outrage drives clicks, and clicks drive profit. The more controversial or emotionally charged a post, the more engagement it generates, keeping users on the platform longer.
Echo chambers create reinforcing loops. Once you engage with a certain type of content, algorithms ensure you see more of it—whether it’s harmless entertainment or dangerous disinformation.
Fact-checking is de-prioritised. Even when flagged, misleading content often spreads faster than corrections, making damage control nearly impossible.
“We’ve seen how this plays out,” Pillay notes. “From xenophobic violence to election interference, digital platforms have repeatedly failed to prevent real-world harm. They claim neutrality, but their algorithms are anything but neutral.”
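To make the mechanism concrete, here is a toy sketch in Python. It is entirely hypothetical and not drawn from any platform's actual ranking code; it simply illustrates the dynamic described above, where engagement drives the score, previously engaged topics are boosted, and a fact-check flag barely dents a post's reach.

```python
# Toy illustration (hypothetical, not any platform's real ranking logic) of an
# engagement-driven feed ranker: high-reaction posts score higher, topics the
# user already engages with are boosted (the echo-chamber loop), and a
# fact-check flag applies only a mild penalty.

from dataclasses import dataclass, field


@dataclass
class Post:
    topic: str
    reactions: int          # likes, angry reacts, replies, shares combined
    flagged: bool = False   # marked as misleading by fact-checkers


@dataclass
class User:
    engaged_topics: set = field(default_factory=set)


def rank_feed(user: User, posts: list[Post]) -> list[Post]:
    def score(post: Post) -> float:
        s = float(post.reactions)           # engagement drives the base score
        if post.topic in user.engaged_topics:
            s *= 2.0                        # reinforce what the user already clicks on
        if post.flagged:
            s *= 0.9                        # the fact-check flag barely matters
        return s

    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    user = User(engaged_topics={"conspiracy"})
    feed = rank_feed(user, [
        Post("local news", reactions=120),
        Post("conspiracy", reactions=300, flagged=True),
        Post("correction", reactions=40),
    ])
    for post in feed:
        print(post.topic)   # the flagged conspiracy post still ranks first
```

Even in this crude model, the flagged post outranks both the ordinary news item and the correction, which is exactly the dynamic Pillay describes: corrections exist, but the ranking incentives ensure they arrive too late and travel too slowly.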
So where do we go from here?
If big tech won’t fix the problem, Pillay believes the burden will fall on users, independent media, and regulators to push for ethical digital spaces.
Alongside this, she argues for stronger digital literacy initiatives to help people recognise misinformation and manipulation tactics.
“People need to understand how they are being manipulated online,” she says. “Misinformation isn’t just about fake news—it’s about how digital ecosystems are designed to keep us hooked on outrage.”
There will also need to be a stronger push for platform accountability – these platforms must be more transparent about how content is ranked, moderated, and monetised.
Pillay makes a point I agree with completely – it is possible to build an ethical big tech platform. Profit is fine, profit is good, but it can sit alongside a model that prioritises the safety of users and creators over engagement at any cost.
At The Angle, we are committed to unpacking these complex intersections between media, technology, and power in Africa and beyond. If you care about the future of news, democracy, and digital integrity, this episode is a must-listen.