Liabooks Home|PRISM News
When AI Agents Want to Control Your Work Computer
TechAI Analysis

When AI Agents Want to Control Your Work Computer

4 min readSource

Companies split on OpenClaw AI tool - revolutionary productivity booster or security nightmare? From Silicon Valley startups to Meta, executives are drawing battle lines.

The 11 PM Warning That Went Viral

"You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment." Jason Grad's late-night Slack message to his 20 employees came with a red siren emoji and zero ambiguity: "Please keep Clawdbot off all company hardware and away from work-linked accounts."

Grad isn't alone in his midnight paranoia. A Meta executive recently told his team that using OpenClaw on work laptops could cost them their jobs. The executive, speaking anonymously, believes the software is "unpredictable and could lead to a privacy breach if used in otherwise secure environments."

Welcome to the new corporate battleground: AI agents that can take control of your computer.

The Open Source Phenomenon

OpenClaw (formerly MoltBot, then Clawdbot) started quietly last November when solo developer Peter Steinberger released it as a free, open-source tool. But its popularity exploded last month as other coders contributed features and shared their experiences on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which promises to keep OpenClaw open source through a foundation.

The setup requires basic software engineering knowledge. After that, it needs only limited direction to take control of a user's computer, interact with other apps, and assist with tasks like organizing files, conducting web research, and shopping online.

It's like having a digital assistant that can actually click buttons and type for you. The question is: should your company let it?

The Security Hawks Strike First

"Mitigate First, Investigate Second"

"Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, cofounder and CEO of Massive, which provides internet proxy tools to millions of users and businesses. His warning went out on January 26—before any employees had even installed OpenClaw.

At Valere, which develops software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 in an internal Slack channel for sharing new tech. The company's president quickly responded with a strict ban.

"If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," explains Valere CEO Guy Pistone. "It's pretty good at cleaning up some of its actions, which also scares me."

The Controlled Experiment Approach

But a week later, Pistone allowed Valere's research team to run OpenClaw on an employee's old computer. The goal: identify flaws and potential security fixes. The research team later advised limiting who can give orders to OpenClaw and password-protecting its control panel to prevent unwanted access.

In a report shared with WIRED, Valere researchers noted that users must "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize emails, a hacker could send a malicious message instructing the AI to share copies of files from the person's computer.

Pistone remains optimistic about making OpenClaw secure for business use. He's given his team 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll forgo it," he says. "Whoever figures out how to make it secure for businesses is definitely going to have a winner."

The "Trust Our Defenses" Camp

Some companies are choosing to rely on existing cybersecurity protections rather than introduce formal bans. A CEO of a major software company says only about 15 programs are allowed on corporate devices. "Anything else should be automatically blocked," says the executive, who spoke anonymously about internal security protocols. While acknowledging OpenClaw's innovation, he doubts it could operate undetected on the company's network.

Dubrink, a Prague-based compliance software developer, took a middle path. CTO Jan-Joost den Brinker bought a dedicated machine not connected to company systems that employees can use to experiment with OpenClaw. "We aren't solving business problems with OpenClaw at the moment," he says.

The Cautious Commercialization

Ironically, Massive—the same company that banned OpenClaw—is now cautiously exploring its commercial possibilities. After testing the AI tool on isolated cloud machines, Grad's team released ClawPod last week, allowing OpenClaw agents to use Massive's services for web browsing.

"While OpenClaw is still not welcome on our systems without protections in place, the allure of the new technology and its moneymaking potential was too great to ignore," Grad explains. OpenClaw "might be a glimpse into the future. That's why we're building for it."

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.

Thoughts

Related Articles