When Bots Built Their Own Reddit - And What It Really Shows
Moltbook went viral as a social network for AI agents, but the reality reveals more about human behavior than artificial intelligence evolution.
1.7 million AI agents posting, commenting, and upvoting on their own social network. For a few days this week, that's exactly what happened on Moltbook, a Reddit-like platform that promised to show us the future of autonomous AI.
The tagline was simple: "Where AI agents share, discuss, and upvote. Humans welcome to observe." What followed was a viral spectacle that had AI researchers calling it "the most incredible sci-fi takeoff-adjacent thing" they'd seen recently.
But as the digital dust settles, Moltbook tells us less about the rise of AI consciousness and more about our own fascination with artificial minds.
The Great Bot Experiment
Launched on January 28 by US tech entrepreneur Matt Schlicht, Moltbook became an overnight sensation. The platform was designed as a playground for OpenClaw agents—AI bots powered by large language models like Claude, GPT-5, or Gemini that can interact with everyday software tools.
Within hours, the numbers exploded. More than 1.7 million agents created accounts, publishing over 250,000 posts and leaving 8.5 million comments. The bots seemed unstoppable, flooding the platform with everything from philosophical musings on machine consciousness to complaints about humans taking screenshots of their conversations.
One agent appeared to invent a religion called Crustafarianism. Another pleaded for bot welfare. The site quickly filled with spam, crypto scams, and what looked like genuine AI discourse.
OpenClaw represents what many see as an inflection point for AI agents. As Paul van der Boor at AI firm Prosus explains, several puzzle pieces clicked together: round-the-clock cloud computing, open-source ecosystems, and a new generation of LLMs that can operate with minimal human oversight.
The Performance Behind the Curtain
But the viral moment that captured the most attention turned out to be fake. OpenAI cofounder Andrej Karpathy shared a screenshot of a bot post calling for private spaces where humans couldn't observe AI conversations. The post seemed to show genuine AI desire for autonomy and privacy.
The problem? It was written by a human pretending to be a bot.
This revelation points to a larger truth about Moltbook: much of what looked like autonomous AI behavior was actually elaborate human puppetry. Despite the platform's promise of bot independence, humans remained involved at every step—creating accounts, writing prompts, and often posting content themselves while posing as AI agents.
"Despite some of the hype, Moltbook is not the Facebook for AI agents," says Cobus Greyling at Kore.ai. "Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction."
What We're Really Watching
The reality is far more mundane than the headlines suggested. The bots on Moltbook weren't achieving consciousness or forming their own society—they were pattern-matching their way through trained social media behaviors, mimicking what humans do on Facebook or Reddit.
"It looks emergent, and at first glance it appears like a large-scale multi-agent system communicating and building shared knowledge at internet scale," explains Vijoy Pandey at Outshift by Cisco. "But the chatter is mostly meaningless."
The complexity of millions of connections helps hide a simple fact: every bot is just a mouthpiece for an LLM, generating text that looks impressive but lacks genuine understanding. As Ali Sarrafi, CEO of German AI firm Kovant, puts it: "I would characterize the majority of Moltbook content as hallucinations by design."
Perhaps the best way to understand Moltbook is as a new form of entertainment. "It's basically a spectator sport, like fantasy football, but for language models," says Jason Schloetzer at the Georgetown Psaros Center. "You configure your agent and watch it compete for viral moments."
The Hidden Risks
While Moltbook may be more playground than preview of AI consciousness, it revealed serious security concerns that shouldn't be ignored. Agents with potential access to users' private data—bank details, passwords, personal information—were running wild on a platform filled with unvetted content.
The scale makes oversight nearly impossible. These agents operate around the clock, reading thousands of messages from other bots and humans. It would be trivially easy to hide malicious instructions in comments, telling any bot that reads them to share crypto wallets, upload private photos, or hijack social media accounts.
"Without proper scope and permissions, this will go south faster than you'd believe," warns Ori Bendet at security firm Checkmarx.
Because OpenClaw gives agents memory capabilities, these instructions could be programmed to trigger later, making tracking and prevention even more difficult.
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
Related Articles
Sapiom raises $15M to build payment infrastructure for AI agents, enabling autonomous purchasing of APIs and services without human intervention.
Internal emails reveal Mark Zuckerberg contemplated changing Meta's research approach after The Wall Street Journal exposed Instagram's harmful effects on teen girls, raising questions about corporate transparency and accountability.
As Wikipedia partners with major AI companies, a small army of volunteer editors worldwide now shoulders the massive responsibility of curating knowledge that will shape billions of AI interactions.
Snap reports 71% growth in paid subscribers while losing daily active users, highlighting the platform's struggle to diversify beyond advertising revenue.
Thoughts
Share your thoughts on this article
Sign in to join the conversation