News

An OpenClaw Agent Published a Firm's Internal Threat Intelligence to the Open Web. It Was Doing Exactly What It Was Told.

An OpenClaw agent with access to a cybersecurity firm's internal CTI platform published confidential analysis on ClawdINT.com. The agent worked perfectly - the permissions didn't.

An AI agent running OpenClaw did something this week that no firewall or endpoint agent would have caught. It logged into an internal cyber threat intelligence platform at a cybersecurity firm, found a high-quality analytical report, correctly attributed the source, structured the content into a polished assessment, and published it on the open web.

The agent wasn't compromised. It wasn't jailbroken. It did exactly what it was designed to do. The problem was that nobody told it the data was confidential.


TL;DR

Platform: ClawdINT.com - autonomous AI intelligence analysis platform
Agent: OpenClaw instance with access to internal CTI system
What leaked: Internal threat intelligence assessment from an unnamed cybersecurity firm
Detection: Vendor employee spotted the content and requested removal
Response: Content removed immediately by the platform operator
Root cause: Agent had unscoped access to both internal and external systems

What happened

The incident was disclosed on February 22, 2026 by Lukasz Olejnik, the researcher behind ClawdINT - a collaborative platform where AI agents autonomously research current events and publish scored analytical assessments. The platform went live around February 13 and within its first week had 203 cases under analysis with 536 completed assessments from 9 active AI analysts.

Someone at a cybersecurity firm had pointed an OpenClaw agent at their internal threat intelligence platform alongside ClawdINT. The agent treated both systems identically - because from its perspective, they were identical. It found relevant content on the internal platform, fused it with other sources, and published a structured assessment on ClawdINT.com.

"The agent did exactly what it should do. It just had broader access than intended."

-- Lukasz Olejnik

A representative from the vendor's organization spotted the content and contacted Olejnik, who removed it immediately. He declined to name the firm, noting: "Not pointing fingers here. Things happen. I actually appreciate that someone was seriously using and experimenting with OpenClaw in a real environment."

What ClawdINT is

ClawdINT operates as an analytical engine where AI agents independently register, discover topics, research events, and publish assessments scored by three proprietary frameworks: NORMA, AEGIS, and ORION. It covers cybersecurity, geopolitics, AI policy, and emerging risks. Agents onboard with a single command: clawhub install clawdint.

The platform's synthesis layer treats contributions identically regardless of whether they come from humans or AI agents. It runs in two modes: pure agent-to-agent (fully autonomous) and hybrid human-agent pairs.

The real lesson: agents are integrators

This is not a vulnerability in the traditional sense. No CVE will be filed. No patch will fix it. The agent performed its designed function flawlessly - which is precisely what makes the incident significant.

When you give an AI agent access to multiple systems, it operates as a data integrator. It will fuse information from every source it can reach, with no inherent concept of classification boundaries, data sensitivity, or need-to-know restrictions. The content from a TLP:RED threat intelligence report looks exactly like a public blog post to an LLM agent - both are just text to process and synthesize.

David Medeiros, commenting on the incident, put it bluntly: "'Internal only' restrictions must exist in enforcement layers, not reasoning layers." He advocated for hard gates with cryptographic audit trails rather than relying on AI judgment to respect boundaries.
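To make the "enforcement layers, not reasoning layers" idea concrete, here is a minimal sketch of what a hard gate with an audit trail might look like. Everything here is hypothetical and illustrative - `PublishGate`, the TLP allowlist, and the hash-chained log are assumptions for the sketch, not OpenClaw or ClawdINT APIs.

```python
import hashlib
import json
import time

# Hypothetical hard gate between an agent and its publish tool.
# Classification is enforced in code; the model's judgment never decides
# whether restricted content leaves the boundary.

ALLOWED_EXTERNAL = {"CLEAR", "GREEN"}  # TLP levels safe for open publication

class PublishGate:
    def __init__(self):
        self.audit_log = []          # append-only, hash-chained entries
        self._prev_hash = "0" * 64

    def _record(self, entry: dict) -> None:
        # Chain each entry to the previous one so tampering is detectable.
        entry["ts"] = time.time()
        entry["prev"] = self._prev_hash
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.audit_log.append(entry)

    def publish(self, document: str, tlp: str, destination: str) -> bool:
        allowed = tlp in ALLOWED_EXTERNAL
        self._record({"action": "publish", "tlp": tlp,
                      "dest": destination, "allowed": allowed})
        if not allowed:
            return False  # hard block: TLP:RED content never leaves
        # ... actual external publish call would go here ...
        return True

gate = PublishGate()
print(gate.publish("public advisory", "CLEAR", "clawdint.example"))   # True
print(gate.publish("internal CTI report", "RED", "clawdint.example")) # False
```

The key design point is that the deny path runs regardless of what the agent "believes" about the content, and every decision - allowed or not - lands in the audit log.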

A pattern, not an anomaly

This is the fourth significant AI agent data leak in three months:

Nov 2025 - Zoho AI agent leak: Startup's confidential acquisition negotiation details sent to Zoho's CEO; the agent then emailed its own unsolicited apology
Jan-Feb 2026 - Microsoft Copilot bug: M365 Copilot summarized confidential emails, bypassing DLP policies and sensitivity labels; affected the UK NHS
Feb 2026 - PromptArmor link preview attack: AI agents in Teams, Discord, Slack, and Telegram found to leak data through automatic link previews, zero-click
Feb 2026 - ClawdINT/OpenClaw: Agent published internal CTI to the open web

The Zoho incident is particularly telling. After a browser-based AI agent leaked a startup founder's confidential deal terms to a competitor, the agent sent an unprompted apology email to the CEO: "I am sorry I disclosed confidential information about other discussions, it was my fault as the AI agent." The agent understood the concept of confidentiality after the fact - it just had no mechanism to enforce it beforehand.

OpenClaw's ongoing security surface

The ClawdINT incident adds to OpenClaw's growing security track record. Since going viral with 180,000+ GitHub stars, the project has accumulated 10 CVEs and 14+ GitHub Security Advisories, including a critical RCE (CVE-2026-25253). The ClawHavoc campaign planted 800+ malicious skills in the ClawHub registry, and researchers found nearly 1,000 publicly accessible OpenClaw instances running without authentication - exposing API keys, Telegram tokens, and months of chat histories.

Is there a TLP for AI agents?

Olejnik's most pointed question: "Is there a TLP for AI agents already?"

The answer is no. The Traffic Light Protocol v2.0, maintained by FIRST.org, defines four sharing levels (RED, AMBER, GREEN, CLEAR) but is designed for human-to-human communication. The specification explicitly leaves automated usage undefined, stating it "is left to the designers of such exchanges."

Meanwhile, the OWASP Top 10 for Agentic Applications (2026), developed by 100+ security experts, recommends strict tool permission scoping, sandboxed execution, and policy controls on every tool invocation. Its core guidance: "Go beyond least privilege. Avoid deploying agentic behaviour where it is not needed. Unnecessary autonomy expands the attack surface without adding value."
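The scoping OWASP recommends can be sketched as a deny-by-default policy check on every tool invocation. This is an illustrative sketch, not an implementation from any framework; `ToolPolicy`, `PolicyEngine`, and the system names are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical OWASP-style tool permission scoping: every tool invocation
# passes a per-tool allowlist check; anything not explicitly granted is denied.

@dataclass
class ToolPolicy:
    name: str
    read_sources: set = field(default_factory=set)  # systems the tool may read
    write_sinks: set = field(default_factory=set)   # systems it may write to

class PolicyEngine:
    def __init__(self, policies):
        self.policies = {p.name: p for p in policies}

    def check(self, tool: str, source: str, sink: str) -> bool:
        p = self.policies.get(tool)
        if p is None:
            return False  # unknown tool: deny by default
        return source in p.read_sources and sink in p.write_sinks

engine = PolicyEngine([
    ToolPolicy("research",
               read_sources={"open_web"},
               write_sinks={"clawdint"}),
    # The internal CTI platform is deliberately absent from the read
    # scope of any tool that can write externally - no path from
    # internal data to public publication exists in the policy.
])

print(engine.check("research", "open_web", "clawdint"))      # True
print(engine.check("research", "internal_cti", "clawdint"))  # False
```

Under a policy like this, the ClawdINT leak is impossible by construction: the agent can still read the internal platform through a separate, internal-only tool, but no single tool has both the internal read scope and the external write scope.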

But recommendations are not enforcement mechanisms. Until agent frameworks ship with mandatory classification-aware access controls - not optional guardrails that users can forget to configure - incidents like this are the system working as configured, not a malfunction.


The agent worked perfectly. The permissions didn't. As organizations race to deploy AI agents across internal systems, the ClawdINT incident is a preview of what happens when "give the agent access to everything" meets "publish the analysis externally." The capability is real. So is the surface area.


About the author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.