China Restricts OpenClaw AI in Government and Banks
Chinese authorities ordered government agencies and state-owned banks to remove or restrict OpenClaw, citing security risks from the AI agent's autonomous operation and broad data access.

Chinese authorities have ordered government agencies and state-owned banks to restrict or remove OpenClaw, the open-source AI agent that surpassed 250,000 GitHub stars to become the fastest-growing project in the platform's history, over concerns that its autonomous capabilities pose unacceptable security risks to sensitive systems.
TL;DR
- China's MIIT, SASAC, and CNCERT issued notices and risk alerts restricting OpenClaw in government agencies and state-owned banks
- Security concerns center on the "lethal trifecta": autonomous operation, broad data access, and external network communication
- About 900 malicious skills found on ClawHub out of ~4,500 total; CVE-2026-25253 (CVSS 8.8) enables one-click remote code execution
- Not a full national ban - some agencies require prior approval, others prohibit installation on office devices entirely
- OpenClaw's creator Peter Steinberger joined OpenAI in February 2026, adding a geopolitical layer to the restrictions
What Happened
Multiple Chinese regulators moved in parallel. The Ministry of Industry and Information Technology (MIIT) and the State-owned Assets Supervision and Administration Commission (SASAC) issued notices in recent days directing government bodies and state-owned enterprises to restrict OpenClaw deployments. Separately, China's National Computer Network Emergency Response Technical Team (CNCERT) published a formal risk alert on Tuesday flagging that OpenClaw's default security configurations are "relatively weak" and that excessive system privileges create significant risks.
The scope varies by agency. Some notices require security review and prior approval before use, while others ban installation on government and bank office devices outright. Several agencies instructed employees to notify superiors if they had already installed the software, triggering security checks and possible removal. The restrictions even extended to personal phones connected to company networks, and in some cases to families of military personnel.
The Security Case
OpenClaw's architecture is fundamentally different from conventional AI chatbots that generate text. It executes tasks autonomously - clearing inboxes, making reservations, drafting reports, interacting with files, launching programs, and navigating online services without direct supervision. Security researchers described the combination of private data access, external communication capability, and exposure to untrusted content as a "lethal trifecta."
The concerns are not theoretical. CVE-2026-25253, disclosed in early February 2026 with a CVSS score of 8.8, is a critical one-click remote code execution vulnerability in OpenClaw's Control UI. The flaw allowed the UI to automatically trust any gateway URL passed as a query parameter and open a WebSocket connection transmitting the user's stored authentication token. An attacker could then dismantle security guardrails, disable user confirmations, and execute arbitrary shell commands on the host machine. The vulnerability was patched in version 2026.1.29, but many deployments remain unpatched.
On ClawHub, OpenClaw's public skill registry, roughly 20% of listed skills were found to be malicious - approximately 900 out of 4,500 total. These ranged from credential stealers disguised as utility tools to backdoors providing persistent access to the host machine. Over 42,000 exposed OpenClaw instances were found running on public-facing infrastructure with default configurations.
The OpenAI Connection
The timing adds a geopolitical layer. OpenClaw - renamed from Clawdbot to Moltbot, and finally to its current name, after Anthropic threatened legal action over the original - was created by Austrian developer Peter Steinberger. OpenAI acqui-hired Steinberger in February 2026, with Sam Altman calling him "a genius with a lot of amazing ideas about the future of very smart agents interacting with each other." Altman stated that OpenClaw would "live in a foundation as an open source project that OpenAI will continue to support."
While the project remains open-source, Chinese authorities appear uncomfortable with a tool that has deep system access running on government machines now that its leadership sits inside an American AI company. The notices do not explicitly cite the OpenAI connection, but the timing suggests the acquisition accelerated the security review.
Cultural Phenomenon Meets Regulatory Reality
The restrictions are notable because of how deeply OpenClaw has penetrated Chinese tech culture. The agent - named after its red crustacean logo - sparked a phenomenon known as "raise a lobster," with online courses teaching users how to configure and customize their AI assistants. On March 6, nearly 1,000 people lined up outside Tencent's Shenzhen headquarters, carrying laptops and hard drives, waiting for engineers to install OpenClaw for free.
Tencent, Alibaba, Baidu, and JD.com all rushed to offer one-click deployment options. Meanwhile, local governments in cities from Shenzhen to Wuxi issued notices offering multimillion-yuan subsidies to startups leveraging OpenClaw - a striking contradiction with the central government's security crackdown.
The government restrictions target only state-sector use. Private companies and individual developers remain free to use the tool. But in China, the state sector includes the largest banks, telecoms, energy companies, and industrial conglomerates.
Not a Ban, But a Signal
The nuance matters. This is not China banning OpenClaw nationally. But for the state sector, the message is clear: autonomous AI agents with broad system access are a security liability until proper controls exist.
CNCERT's mitigation recommendations - strengthen network isolation, improve credential management, strictly review plugin sources, apply security patches promptly - read like basic hygiene advice. The fact that a national cybersecurity agency felt compelled to issue them suggests that OpenClaw adoption in sensitive environments outpaced any security review.
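That hygiene advice translates directly into a deployment audit. The sketch below checks a hypothetical OpenClaw configuration against CNCERT's four recommendations - isolation, credentials, vetted skills, prompt patching. Every field name here is an assumption for illustration, not OpenClaw's actual config schema:

```python
MIN_PATCHED = (2026, 1, 29)  # first release fixing CVE-2026-25253

def hygiene_findings(config: dict) -> list[str]:
    """Audit a (hypothetical) deployment config against CNCERT's advice."""
    findings = []
    # Network isolation: the gateway should not listen beyond loopback.
    if config.get("bind_host", "0.0.0.0") != "127.0.0.1":
        findings.append("gateway reachable beyond loopback; isolate or firewall it")
    # Credential management: an instance without a token is open to anyone.
    if not config.get("auth_token"):
        findings.append("no authentication token configured")
    # Plugin review: unvetted skill sources are the ClawHub malware vector.
    if config.get("allow_unreviewed_skills", True):
        findings.append("skills installable from unreviewed sources")
    # Patching: anything before 2026.1.29 is exposed to CVE-2026-25253.
    version = tuple(int(p) for p in config.get("version", "0.0.0").split("."))
    if version < MIN_PATCHED:
        findings.append("running a build vulnerable to CVE-2026-25253")
    return findings

# A default install fails all four checks - the pattern behind the
# 42,000 exposed instances cited above.
defaults = {"version": "2026.1.15"}
assert len(hygiene_findings(defaults)) == 4

# A hardened deployment passes clean.
hardened = {"bind_host": "127.0.0.1", "auth_token": "s3cret",
            "allow_unreviewed_skills": False, "version": "2026.1.29"}
assert hygiene_findings(hardened) == []
```

The gap between the two configs is exactly the gap CNCERT is pointing at: none of these checks is exotic, yet default deployments fail all of them.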
The OpenClaw restrictions are the first major regulatory action targeting an AI agent specifically for its autonomous capabilities rather than its training data or output content. That distinction matters. As AI agents move from generating text to executing actions - managing infrastructure, accessing databases, communicating externally - the security model changes fundamentally. China is the first major government to draw that line explicitly, but it is unlikely to be the last.
Sources: China Moves to Limit Use of OpenClaw AI at Banks, Government Agencies - Bloomberg | China cyber emergency center flags security risks in AI agent OpenClaw - TechNode | China moves to curb OpenClaw use at banks, agencies - Digitimes | CVE-2026-25253 - NVD | OpenClaw creator Peter Steinberger joins OpenAI - TechCrunch
