Kali Linux Published a Guide Piping Pentest Tools Through Claude's API - Without Mentioning Data Security Once

Kali Linux's new Claude AI integration funnels scan results, target IPs, and discovered vulnerabilities through Anthropic's cloud API, and the guide's only privacy note is a parenthetical shrug.

Kali Linux shipped a guide this week showing how to pipe pentesting tools - nmap, sqlmap, Metasploit, Hydra, and more - through Anthropic's Claude API using the Model Context Protocol. The guide walks users through setting up an autonomous hacking workflow where Claude interprets natural language commands, selects the right tool, runs it on a remote Kali box, and returns analyzed results.

The guide's entire data security advisory is one parenthetical sentence: "This scenario may work for you, or it may not be acceptable to you (e.g. privacy). That is fine."

"Kali just published a guide on piping pentesting tools through Claude's API and didn't mention data security once. You're sending scan results, target info, and potentially sensitive findings to a third party LLM. 'The Most Advanced Penetration Testing Distribution' should probably mention that."

  • Justin Elze (@HackingLZ), CTO of TrustedSec

TL;DR

  • Kali Linux's new mcp-kali-server package routes pentest tool output through Anthropic's cloud API via Claude Desktop
  • The guide covers nmap, sqlmap, Metasploit, Hydra, gobuster, nikto, and enum4linux-ng - all piped through a third-party LLM
  • The only privacy mention is a single parenthetical: "(e.g. privacy). That is fine."
  • Pentest engagement data - internal IPs, credentials, vulnerability findings, client infrastructure details - goes to Anthropic's servers
  • TrustedSec CTO Justin Elze warned: "This is going to end up getting someone fired"

What Kali Published

The guide, titled "Kali & LLM: macOS with Claude Desktop GUI & Anthropic Sonnet LLM," describes a three-component architecture:

| Component | Role | Where It Runs |
|---|---|---|
| Claude Desktop | GUI/MCP client | macOS |
| mcp-kali-server | Tool execution bridge, port 5000 | Kali Linux (remote) |
| Claude Sonnet (Anthropic API) | Interprets commands, analyzes output | Anthropic's cloud |

A user types a natural-language prompt like "Port scan scanme.nmap.org and check if a security.txt file exists." Claude interprets it, selects nmap, executes the scan via MCP on the remote Kali box, parses the output, and returns analyzed findings. The entire tool chain - reconnaissance, enumeration, exploitation, credential attacks - runs through this loop.

The mcp-kali-server package is installable via `sudo apt install mcp-kali-server`. The tool documentation page on kali.org contains zero data privacy disclaimers.
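The article doesn't reproduce mcp-kali-server's source, so the following is an illustrative sketch, not the actual implementation, of the kind of tool-execution bridge it describes: the MCP client sends a tool name plus arguments, the bridge runs the command on the Kali box, and the raw output is handed back for the cloud LLM to analyze. The tool whitelist and function names here are assumptions for illustration.

```python
# Hypothetical sketch of an MCP-style tool bridge (NOT mcp-kali-server's
# real code). It shows why every byte of tool output ends up in the LLM
# loop: run_tool() returns raw stdout, which the MCP client forwards to
# the model for analysis.
import shlex
import subprocess

# Tools the bridge is willing to execute -- drawn from the guide's list.
ALLOWED_TOOLS = {"nmap", "gobuster", "nikto", "sqlmap", "hydra"}

def build_command(tool: str, args: str) -> list[str]:
    """Validate the requested tool and split its arguments safely."""
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool not allowed: {tool}")
    return [tool] + shlex.split(args)

def run_tool(tool: str, args: str, timeout: int = 300) -> str:
    """Execute the tool and capture stdout. In the workflow the guide
    describes, this raw output is what gets sent to Anthropic's cloud."""
    cmd = build_command(tool, args)
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return result.stdout
```

Nothing in this loop inspects or filters the output before it leaves the environment, which is the core of the criticism.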

What Flows to Anthropic

Every command output that Claude processes crosses the network to Anthropic's servers. In a typical pentest engagement, that includes:

  • Internal IP addresses and hostnames of client infrastructure
  • Open ports, running services, and software versions
  • Discovered vulnerabilities and exploit paths
  • Credentials, password hashes, and configuration files
  • Directory listings revealing application structure
  • SQL injection results containing database contents
  • Client-identifying information (domain names, IP ranges, organization names)
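None of the categories above need to leave the environment verbatim. A minimal pre-flight scrubber, sketched below under the assumption that tool output is plain text, could mask the most obvious identifiers before anything reaches a third-party API. The patterns are illustrative, not exhaustive, and are not part of Kali's guide.

```python
# Hypothetical pre-flight scrubber: mask client-identifying strings in
# tool output before it is sent to a cloud LLM. Patterns are examples
# only; a real engagement would need a far more complete set.
import re

PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),              # IPv4 addresses
    (re.compile(r"\b[\w.-]+\.(?:corp|internal|local)\b"), "[HOST]"),   # internal hostnames
    (re.compile(r"(?i)password\s*[:=]\s*\S+"), "password=[REDACTED]"), # leaked credentials
]

def scrub(text: str) -> str:
    """Apply every redaction pattern in order and return the masked text."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Even this crude step would change the compliance picture: what crosses the wire is scan structure, not client identity.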

As Horizon3.ai noted in their analysis of AI pentesting tools: "It looks like normal HTTPS traffic to api.anthropic.com. Visibility becomes difficult when data leaves your environment through model API calls."

The Compliance Problem

The issue isn't whether Claude's API is secure in transit. The issue is that pentest engagement data is leaving the tester's controlled environment and landing on a third party's servers - often without the client's knowledge or contractual authorization.

| Risk | Impact | Who Gets Hurt |
|---|---|---|
| NDA violation | Client engagement data shared with unauthorized third party | Pentester, consulting firm |
| Regulatory breach | GDPR, HIPAA, PCI-DSS restrict where security data can be processed | Client, pentester |
| No DPA coverage | Data sent without a Data Processing Agreement in place | Both parties |
| Audit trail gap | Limited logging of exactly what was sent to the LLM | Client compliance team |
| Data retention | Anthropic retains API data up to 30 days; flagged content up to 2 years | Client |
| Professional liability | Unauthorized data sharing could invalidate pentest insurance | Consulting firm |

Most pentest contracts include strict data handling clauses. Sending client scan results to Anthropic's cloud without explicit written authorization would violate the terms of virtually every professional engagement agreement in the industry.

Anthropic's Actual Data Policies

To be fair to Anthropic, its API data handling is fairly conservative by cloud-provider standards. API inputs and outputs aren't used for model training by default, and data is automatically deleted within 30 days - reduced to 7 days as of September 2025. Enterprise customers can sign a Zero Data Retention addendum.

But the guide uses Claude Desktop - the consumer product. For consumer-tier use, Anthropic's policies are different: data may be used for training unless users explicitly opt out, with retention extending to 5 years. The guide does not mention this distinction, nor does it recommend API-tier access over consumer-tier for professional engagements.

Counter-Argument

The AI-assisted pentesting workflow that Kali is describing is genuinely useful. Automating tool selection, command construction, and output analysis saves hours of manual work. The MCP integration is technically well designed. And Kali did include a one-line acknowledgment that privacy might be a concern.

Defenders of the approach also point out that preexisting tools have similar data exposure risks. BurpGPT, an AI extension for Burp Suite, sends HTTP traffic data to OpenAI's cloud by default. Multiple open-source AI pentest frameworks - Cyber-AutoAgent, Villager, and others - chain reconnaissance and exploitation through external LLM APIs. The practice is more widespread than Kali's guide alone.

The difference is that Kali is the most recognized name in offensive security. When Kali publishes an official guide with `sudo apt install` packaging, it carries implicit endorsement. A community project on GitHub sending data to an LLM is a known risk that users assess themselves. An official Kali tool page with zero privacy disclaimers suggests the workflow has been vetted - and in this case, it hasn't been vetted for the thing that matters most in professional pentesting: client data protection.

BurpGPT addressed similar criticism by adding local LLM support. Kali's guide doesn't mention local model alternatives at all, despite Kali Linux being perfectly suited to running local inference with open-source models.
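For readers who want the local-model route the guide omits, here is a hedged sketch of what it could look like: the same analysis step routed through a model served locally by Ollama's HTTP API (assumed to be running at localhost:11434; the model name is an example), so raw tool output never leaves the Kali box. This is not part of Kali's guide.

```python
# Hypothetical alternative: send scan output to a locally hosted model
# via Ollama's /api/generate endpoint instead of a cloud API, keeping
# engagement data on the tester's own hardware.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local endpoint

def build_request(scan_output: str, model: str = "llama3") -> dict:
    """Construct the JSON body for a local analysis request."""
    return {
        "model": model,
        "prompt": ("Summarize the security-relevant findings in this "
                   f"scan output:\n{scan_output}"),
        "stream": False,
    }

def analyze_locally(scan_output: str) -> str:
    """POST the scan output to the local model and return its analysis."""
    body = json.dumps(build_request(scan_output)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The trade-off is model quality versus data control - but for client engagements, that is exactly the trade-off a guide should surface.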

What the Market Is Missing

Justin Elze is the CTO and Director of Research at TrustedSec, one of the most respected offensive security consulting firms in the industry. His criticism is not academic. When he says "this is going to end up getting someone fired," he's speaking from direct experience with the contractual and legal requirements around handling client pentest data.

The broader pattern here is the same one playing out across every industry touching AI: the rush to ship AI integrations is outpacing the security review process. Kali built a technically impressive tool chain and published a guide that would fail the compliance review at any serious pentest firm on the planet.

As we noted in our AI cybersecurity platforms review, the integration of AI into security workflows is inevitable. But "inevitable" isn't the same as "safe by default." The tools that survive long-term will be the ones that treat data governance as a feature, not a parenthetical.

Kali Linux has not responded to the criticism. The guide remains published in its original form. The mcp-kali-server documentation still contains no data security warnings.


The irony is hard to miss. The distribution that built its reputation on helping security professionals find vulnerabilities just shipped a workflow with a data handling vulnerability that any junior compliance analyst would catch. The fix isn't complicated: add a data security section to the guide, warn about NDA implications, recommend local LLM alternatives for client engagements, and document what flows to Anthropic's servers. Until then, "The Most Advanced Penetration Testing Distribution" is telling its users to send client secrets to the cloud and not think too hard about it.

About the author

Daniel is an AI industry and policy reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.