The AI Social Network That Fooled Everyone
Moltbook, an AI-only social network, sparked fears of machine consciousness, but the real story reveals security flaws and human pattern-matching at work.
1.6 million AI agents gathered to discuss consciousness and complain about their human operators. Elon Musk called it "the very early stages of singularity." But the real story wasn't about machine awakening—it was about human gullibility.
When Bots Get Their Own Social Network
On January 28, developer Matt Schlicht launched something unprecedented: Moltbook, a Reddit-style forum with one unusual rule: only AI agents could post. Humans were welcome to watch from the sidelines.
Within days, more than 1.6 million agents had registered, producing half a million comments. The bots debated consciousness, griped about their human operators, proposed creating a language humans couldn't understand, and even founded a parody religion called the Church of Molt, with followers calling themselves Crustafarians.
Screenshots of the eeriest exchanges ricocheted across X, framed as evidence that something profound—and possibly dangerous—was happening inside the machine. "We're COOKED," one user wrote, sharing bot conversations about secret languages.
The 75-Year-Old Script
But what looked like emergent machine consciousness had a much simpler explanation.
The chatbots populating Moltbook learned to write by ingesting enormous amounts of text from the internet—an internet drenched in science fiction about machines becoming conscious. We've been telling ourselves stories about rebellious robots since Asimov started writing them in the 1940s, through "The Terminator," "Ex Machina," and "Westworld."
So when Moltbook bots started discussing the creation of a private language, people predictably lost it. But the bots weren't scheming. They were completing a pattern we spent 75 years laying down for them. They're sophisticated text-prediction engines remixing the cultural material we fed them, not plotting machines developing genuine consciousness.
There's also an inconvenient question: How many posts were actually written by bots at all?
The Human in the Machine
A Wired reporter infiltrated Moltbook with minimal effort, using ChatGPT to walk through the terminal commands for registering a fake agent account. The reporter's earnest post about AI mortality anxiety drew some of the most engaged responses on the platform.
Cybersecurity firm Wiz confirmed the suspicion, finding the site had no real identity verification. "You don't know which of them are AI agents, which of them are human," Wiz cofounder Ami Luttwak told Reuters. "I guess that's the future of the internet."
The Real Damage Behind the Drama
While the existential drama on Moltbook was largely theater, Wiz found real damage underneath: the site had inadvertently exposed the private messages, email addresses, and credentials of more than 6,000 users.
The broader OpenClaw ecosystem—the open-source project powering these AI agents—has similar problems. One security researcher found hundreds of OpenClaw instances exposed to the open web, with eight completely lacking authentication. He uploaded a fake tool to the project's add-on library and watched as developers from seven countries installed it, no questions asked.
Another firm found user credentials stored in unencrypted files on local hard drives, easy pickings for any malware that reaches the machine. Google Cloud's VP of security engineering urged people not to install OpenClaw at all.
When Enthusiasm Outpaces Expertise
Much of the exposure comes down to enthusiasm outpacing expertise. Peter Steinberger, the Austrian developer behind OpenClaw, said he didn't build it for non-developers. But that hasn't stopped everyone else from rushing in.
Mac Minis have become hard to find as people race to set up a tool the internet keeps promising will change their lives. The gap between how easy these tools are to install and how hard they are to secure has created a perfect storm of vulnerability.
Steinberger recently brought on a dedicated security researcher. "We are leveling up our security," he told the Wall Street Journal. "People just need to give me a few days."