OpenClaw: Did OpenAI just acquire a powerful new tool—or a security nightmare?

The more powerful you make an agent, the riskier it becomes. (@chooserich/X)

Summary

OpenClaw was snapped up by Sam Altman’s OpenAI soon after it burst into fame—but he may soon discover how hard it is to keep OpenClaw’s otherwise impressive AI agents under human control. Sometimes due diligence is best done the old-fashioned way.

OpenClaw, the virtual AI agent system that helped spark Wall Street’s $2 trillion sell-off in software stocks, is now in the hands of OpenAI. It’s a win for CEO Sam Altman as far as capturing the zeitgeist goes, but he faces the thorny challenge of making this remarkable new form of generative AI—one that doesn’t just say things but does things—secure enough for businesses to use. That could take longer than the market realizes.

Altman is not alone. AI labs like Anthropic and Alphabet’s Google are racing to build agents that can take independent action, and all are grappling with the same fundamental tension: The more powerful you make an agent, the riskier it becomes. Last week, Altman announced that he was hiring Peter Steinberger, the Austrian creator of OpenClaw, to “drive the next generation of personal agents.”

OpenClaw is an open-source agent system that runs on a computer and can be given commands through a messaging app like WhatsApp, Telegram or Slack. Its range of capabilities is remarkable. People have told it to manage their emails, automate their business, trade crypto, and, in one case, build a game while they slept before waking up to thousands of users.

The broad possibilities of AI agents flip the idea popularized by venture capitalist Marc Andreessen that “software is eating the world.” Now AI might just eat software. For instance, if you pay a subscription to a price-monitoring tool that tracks the websites of your business competitors, that service could be replaced by a single instruction to an AI agent. Senior developers like Steinberger often have a half-dozen agents running at once, like digital employees, and can now designate one as the coordinator of a ‘swarm’ of others.

OpenClaw has also inspired a flurry of experimentation and become the fastest-growing project on GitHub, a website for sharing open-source code. Shares of Raspberry Pi nearly doubled in value last week on speculation that its cheap computers would be used to run agents.

But as the popularity of OpenClaw—previously called Clawdbot and MoltBot—has grown, so too have security concerns. Running it on your computer gives it privileged access to your files, email, calendar and applications. A hacker who compromises OpenClaw inherits all that access.

Then there’s how it was made. Steinberger only started building OpenClaw late last year, mostly by talking to AI coding agents via voice and then quickly publishing the results without a full review.

Research firm Gartner has since warned that OpenClaw poses an “unacceptable” security risk and suggested immediately blocking any traffic related to the platform. Cisco researchers called it an “absolute nightmare.” A Meta executive recently told his team to keep OpenClaw off their laptops or risk losing their jobs.

Now OpenAI must find a way to turn the “absolute nightmare” of OpenClaw’s security into something it can sell to enterprise customers. Altman’s decision to keep OpenClaw as an independent foundation is savvy and keeps liability at arm’s length while retaining the brand buzz. But the broader risks of letting an autonomous system read your files and send messages on your behalf remain.

Anthropic’s Claude Cowork, which offers a safer but more limited version of OpenClaw’s agents, shows that a more cautious path is possible. The company runs its agents inside a sandboxed virtual machine, with restricted network access.

Not everyone believes these security problems are intractable. Gavriel Cohen, an Israeli developer who built an alternative to OpenClaw called NanoClaw, says the core fix is “container isolation,” ensuring each agent can only access data you explicitly give it.

The approach is similar to Anthropic’s, but applied differently. “Where it gets difficult is building it in a way that the defaults are secure” for people who don’t understand the risks, he says. Connect your agent to the wrong WhatsApp chat, for instance, and everyone in that group can control your computer. In spite of the security concerns, Cohen says a fintech company valued at $5 billion has already approached him about the possibility of deploying agents to its employees.
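For readers curious what “each agent can only access data you explicitly give it” means in practice, here is a minimal, hypothetical Python sketch of an allowlist-style guard. It is an illustration of the general idea, not NanoClaw’s or Anthropic’s actual implementation; the class and directory names are invented for the example.

```python
from pathlib import Path

class AgentSandbox:
    """Illustrative sketch: an agent may only read files under
    directories that were explicitly granted to it."""

    def __init__(self, allowed_dirs):
        # Resolve grants up front so path tricks like ../.. can't escape.
        self.allowed = [Path(d).resolve() for d in allowed_dirs]

    def can_access(self, path):
        p = Path(path).resolve()
        # Allowed if the resolved path equals, or sits under, a granted dir.
        return any(p == d or d in p.parents for d in self.allowed)

    def read(self, path):
        if not self.can_access(path):
            raise PermissionError(f"agent was not granted access to {path}")
        return Path(path).read_text()

# The agent gets its own working directory and nothing else.
sandbox = AgentSandbox(allowed_dirs=["/tmp/agent-workdir"])
print(sandbox.can_access("/tmp/agent-workdir/notes.txt"))  # inside the grant
print(sandbox.can_access("/home/user/.ssh/id_rsa"))        # outside it
```

Real agent sandboxes enforce this at the container or virtual-machine level rather than in application code, which is why Cohen’s point about secure defaults matters: the boundary has to hold even when the agent, not the user, is choosing the paths.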

Some have compared OpenClaw and alternatives like NanoClaw and the more lightweight PicoClaw to the early days of the internet, which was insecure by design but became safer over time. So too may AI agents—though there’s no guarantee of safety for those in the path of the wrecking ball they may take to many professional roles and the business of building software.

How long that disruption takes depends on how quickly Altman and entrepreneurs like Cohen can make agents both secure and idiot-proof. As any cybersecurity expert will tell you, the latter problem is the hardest to solve. ©Bloomberg

The author is a Bloomberg Opinion columnist covering technology.
