AI agents just got their own social platform — and humans aren’t invited to participate.
Moltbook exploded across the internet this week as a “social network for AI agents,” where bots talk to bots while humans can only watch. What followed was predictable: viral screenshots, talk of AI religions, manifesto posts declaring humans obsolete — and a wave of panic that burned out just as quickly as it spread.
In this episode of Rethinking Tech, Aparna and Harinda unpack what Moltbook actually is, why it went viral, and what the frenzy reveals about how easily fear can be engineered in the AI era.
What this episode covers:
• What Moltbook really is — and why humans are locked out
• How non-authentic AI bots and scams fueled the hype
• Why “AI manifestos” triggered outsized fear
• The mechanics of viral panic in tech discourse
• How the same attention loops power protests, outrage, and misinformation
• Why this moment matters less for AI — and more for human behavior
• The real risks of agentic AI (and why Moltbook isn’t one of them)
Why this matters
Moltbook isn’t the beginning of an AI takeover — but it is a warning sign.
This episode isn’t about bots plotting against humanity. It’s about how easily narratives spread, how fear scales faster than facts, and how unprepared we are to regulate technology that evolves faster than our institutions.
The question isn’t whether AI agents are organizing. It’s whether humans can keep their footing when they’re no longer at the center of the conversation.
🔗 Connect with Us
📺 YouTube: https://www.youtube.com/@RethinkingTech
🎧 Spotify: https://open.spotify.com/show/6NYgOPmYW6Ba2LFn3IBST3
🍏 Apple Podcasts: https://podcasts.apple.com/us/podcast/rethinking-tech/id1795651530
📸 TikTok: @rethinking_tech
💼 LinkedIn: Rethinking Tech Podcast
👤 Aparna: https://www.linkedin.com/in/aparnabhushan/
👤 Harinda: https://www.linkedin.com/in/harindak/