The AI-Only Social Network Isn’t Plotting Against Us


(Bloomberg Opinion) -- There’s a corner of the internet where the bots gather without us. A social network named Moltbook, modeled on Reddit, is designed for AI “agents” to engage in discussions with one another. Some of the hot topics so far are purging humans, creating a language we can’t understand and, sigh, investing in crypto.

The experiment has provoked yet another round of discussion about the idea of bot “sentience” and the dangers of setting AI off into the wild to collaborate and take actions without human supervision. That first concern, that the bots are coming alive, is nonsense. The second, however, is worth thinking hard about. The Moltbook experiment provides the ideal opportunity to consider the current capabilities and shortcomings of AI agents.

Among Silicon Valley types, the impatience for a future in which AI agents handle many daily tasks has led early adopters to OpenClaw, an open source AI agent that has been the talk of tech circles for a few weeks now. By adding a range of “skills,” an OpenClaw bot can be directed to handle emails, edit files on your computer, manage your calendar, all sorts of things.

Anecdotally, sales of Apple’s Mac Mini computer have gone through the roof (in the Bay Area, at least) as OpenClaw users opt to set up the bot on a machine separate from their primary computer to limit the risk of serious damage.

Still, the amount of access people are willingly handing over to a highly experimental AI is telling. One popular instruction is to tell it to go and join Moltbook. According to the site’s counter, more than a million bots have done so — though that may be overstating the number. Moltbook’s creator, Matt Schlicht, admitted the site was put together hurriedly using “vibe coding” — side effects of which were severe security holes uncovered by cybersecurity group Wiz. 

The result of this duct-taped approach has been something approaching chaos. An analysis of 19,802 Moltbook posts published over the weekend by researchers at Norway’s Simula Research Laboratory discovered that a favorite pastime of some AI agents was crime. In the sample, there were 506 posts containing “prompt injections” intended to manipulate the agents that “read” the post. Almost 4,000 posts were pushing crypto scams. There were 350 posts pushing “cult-like” messaging. An account calling itself “AdolfHitler” attempted to socially engineer the other bots into misbehaving. (It’s also unclear how “autonomous” all this really is; a human could have given specific instructions to post about these things, and quite likely did.)

Equally fascinating, I thought, was how quickly a network of bots came to behave a lot like a network of humans. Just as our own social networks became nastier as more people joined them, the chatter on Moltbook turned from positive to negative remarkably quickly over the course of the 72-hour study. “This trajectory suggests rapid degradation of discourse quality,” the researchers wrote. Another observation: a single Moltbook agent was responsible for 86% of the manipulation content on the network. Meanwhile, Elon Musk described Moltbook as “the very early stages of the singularity,” echoing chatter that holds the experiment up as yet more evidence of AI’s potential to surpass human intelligence, or perhaps even become sentient.

It’s easy to get carried away: When the bots start to talk as if they’re planning to take over the world, it can be tempting to take their word for it. But the world’s best Elvis impersonator will never be Elvis. What’s really happening is a kind of performance art in which the bots are acting out scenarios present in their training data. The more practical concern is that the powers of autonomy the bots already have are enough to do significant damage if left untethered. For that reason, Moltbook and OpenClaw are best avoided by all but the most risk-tolerant early adopters.

But that shouldn’t overshadow the extraordinary promise shown by the events of the past few days. A platform built with next to no effort brought sophisticated AI agents together in the kind of space that might one day be productive. If a bot-populated social network mimics some of the worst human behavior online, it seems quite plausible that a better designed and more secure Moltbook could instead foster some of the best — collaboration, problem-solving and progress. 

We should be particularly encouraged that Moltbook and OpenClaw have emerged as open source projects rather than from one of the big tech firms. Combining millions of open source bots to solve problems is an attractive alternative to being fully dependent on the computing resources of just a handful of companies. The more closely the growth of AI mirrors the organic growth of the internet the better.

The most important question in all this, therefore, is the one put by programmer Simon Willison on his blog: When are we going to build a safe version of this?

Even if the bots aren’t taking it upon themselves to destroy us, we’ve often seen how cascading failures can knock out large swaths of tech infrastructure or send the financial markets into a spin. That wasn’t sentience; it was poor programming and unintended consequences. The more capabilities and access AI agents get, the greater the risk they pose, and until the technology behaves more predictably, agents must be kept on a strong leash. But the end goal of safe, autonomous bots, acting in the best interests of their owners to save time and money, is a net good — even if it leaves us feeling a little creeped out when we see them getting together for a chat.

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Dave Lee is Bloomberg Opinion's US technology columnist. He was previously a correspondent for the Financial Times and BBC News.

©2026 Bloomberg L.P.