News

Criminals Are Vibe-Coding Malware Now. The First Samples Are Worse Than You Think.

From ransomware that accidentally destroys its own decryption keys to an 88,000-line Linux framework built by one person in under a week - AI-generated malware is here, and its fingerprints are unmistakable.


TL;DR

  • VoidLink: an 88,000-line Linux malware framework built by a single person in under a week using AI coding agent TRAE SOLO - the first documented case of a full malware platform authored almost entirely by AI
  • Sicarii ransomware: a vibe-coded RaaS that accidentally discards its own decryption keys, making recovery impossible for both victim and attacker
  • PROMPTFLUX and PROMPTSTEAL: malware samples with LLM API calls embedded directly in the source code, querying Gemini and Hugging Face models at runtime
  • Dark web AI tools (GhostGPT, WormGPT, FraudGPT) are selling for $50-$200/month, requiring zero technical expertise

The question is no longer whether criminals are using AI to write malware. The question is how fast the ecosystem is maturing. The answer, based on samples catalogued by Check Point, Google, Halcyon, ESET, Darktrace, and Palo Alto Networks over the past six months, is: faster than the security industry expected.

"Everybody's asking: Is vibe coding used in malware? And the answer, right now, is very likely yes," said Kate Middagh, Senior Consulting Director at Palo Alto Networks Unit 42.

She is being diplomatic. The evidence is no longer circumstantial.

The most significant specimen is VoidLink, a sophisticated Linux malware framework discovered by Check Point Research in late 2025. Written in Zig, with an arsenal of modules in C and a backend in Go, VoidLink is designed for long-term stealthy access to cloud environments and includes eBPF and LKM rootkits, cloud enumeration modules, and modular C2 infrastructure.

What makes VoidLink unique is not what it does but how it was built. The developer used TRAE SOLO, an AI coding agent embedded in the TRAE IDE, and left the receipts: Chinese-language instruction documents, sprint plans, design specifications, and development timelines - all generated by the AI tool and preserved in the codebase.

"VoidLink stands as the first evidently documented case...authored almost entirely by artificial intelligence, likely under the direction of a single individual," Check Point concluded.

The documentation structure initially suggested a multi-team organization. It was one person. The AI enabled them to plan, develop, and iterate a complex malware platform in days - "something that previously required coordinated teams and significant resources."

Check Point's parting question: "How many other sophisticated malware frameworks out there were built using AI, but left no artifacts to tell?"

The Malware That Cannot Decrypt Itself

If VoidLink represents what happens when a competent operator uses AI, Sicarii ransomware shows what happens when an incompetent one does.

Sicarii emerged in December 2025 as a ransomware-as-a-service operation. It uses AES-GCM encryption with 256-bit keys, exploits CVE-2025-64446 in Fortinet devices, and includes geo-fencing that refuses execution on Israeli systems. On paper, it looks professional.

In practice, it has a catastrophic bug: it generates a new RSA key pair on the victim system during each execution and immediately discards the private key once encryption completes. Neither the victim nor the attacker can decrypt the files.
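Sicarii's source is not public, but the flaw Halcyon describes maps onto a familiar hybrid-encryption pattern: AES-GCM for the data, RSA to wrap the AES key. Here is a minimal Python sketch of that key lifecycle with the bug marked - a hypothetical reconstruction, not the actual sample, with all names and parameters chosen for illustration:

```python
# Hypothetical reconstruction of the reported key-handling flaw - not
# Sicarii's actual code. Hybrid scheme: AES-GCM encrypts data, RSA wraps
# the AES key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A fresh RSA pair is generated on the victim machine at every run...
keypair = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# ...and a 256-bit AES-GCM key encrypts the data.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"stand-in for victim data", None)

# The AES key is wrapped with the RSA public key, as usual for ransomware.
wrapped_key = keypair.public_key().encrypt(
    aes_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# THE BUG: the private key is never serialized, escrowed to the operator,
# or written anywhere. Once the process exits, the only object that can
# unwrap `wrapped_key` is gone, so neither victim nor attacker can ever
# recover `aes_key`.
del keypair
```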

"Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error."

The telltale signs: the code contains excessive inline documentation, hardcoded values that should have been parameterized, and implementation patterns consistent with AI generation. Halcyon's key material capture technology was able to intercept encryption keys before the buggy routine completed, enabling recovery without ransom payment. Sicarii subsequently released updated versions fixing the bug - presumably after running the code through another AI session.

Malware That Calls the AI During Execution

The most technically novel specimens embed LLM API calls directly in the malware, querying AI models at runtime:

PROMPTFLUX

Discovered by Google's Threat Intelligence Group in June 2025, PROMPTFLUX is a VBScript dropper that queries the Gemini API to rewrite its own source code. One variant instructs the API to act as an "expert VBScript obfuscator" and regenerates itself on an hourly basis, creating "just-in-time" polymorphism that evades static signature detection.

PROMPTSTEAL

Linked to Russia's APT28 (FROZENLAKE), PROMPTSTEAL masquerades as an image generation program. While guiding users through image prompts, it queries the Hugging Face API (Qwen2.5-Coder-32B-Instruct) in the background to generate reconnaissance commands, then blindly executes the LLM's output locally before exfiltrating collected data. It represents the first observed state-sponsored malware querying an LLM in live operations.

PromptSpy

Discovered by ESET in early 2026, PromptSpy is the first known Android malware to use generative AI in its execution flow. It sends Google Gemini a natural language prompt along with an XML dump of the current screen; Gemini responds with JSON instructions telling the malware what action to perform. This makes it adaptive to virtually any device, screen size, or UI layout.
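All three samples share one static trait: a generative-AI endpoint or model name baked into the code. That suggests a cheap triage heuristic, sketched below in Python. The marker list is illustrative and deliberately incomplete, and the key-format regex is an assumption based on the public shape of Google API keys:

```python
# Triage heuristic: flag files that embed generative-AI API endpoints - the
# one static trait PROMPTFLUX, PROMPTSTEAL, and PromptSpy have in common.
# Marker list is illustrative, not exhaustive.
import re
import sys

LLM_API_MARKERS = [
    rb"generativelanguage\.googleapis\.com",  # Gemini API (PROMPTFLUX, PromptSpy)
    rb"api-inference\.huggingface\.co",       # Hugging Face inference (PROMPTSTEAL)
    rb"Qwen2\.5-Coder-32B-Instruct",          # model named in PROMPTSTEAL reporting
    rb"AIza[0-9A-Za-z_\-]{35}",               # shape of a Google API key (assumption)
]

def llm_indicators(path: str) -> list[str]:
    """Return the markers found in a file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in LLM_API_MARKERS if re.search(m, data)]

if __name__ == "__main__":
    for target in sys.argv[1:]:
        if hits := llm_indicators(target):
            print(f"{target}: possible runtime-LLM malware ({', '.join(hits)})")
```

A hit is not a verdict - plenty of legitimate software calls these APIs - but in a dropper-like context it is a strong pivot point for analysts.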

The AI Malware Marketplace

The dark web ecosystem for AI-powered offense has matured rapidly:

Tool             | Price                     | Distribution           | Capabilities
GhostGPT         | $50/week - $300/3 months  | Telegram               | Malware creation, BEC scams, phishing
FraudGPT         | $200/month - $1,700/year  | Dark web markets       | Spear-phishing, undetectable malware, phishing pages
WormGPT variants | ~EUR 60/subscription      | Telegram, BreachForums | Phishing, PowerShell credential stealers, polymorphic malware

These are not bespoke models. Security researchers at Cato Networks found that new WormGPT variants are wrappers around xAI's Grok and Mistral's Mixtral with jailbreak system prompts. One variant (keanu-WormGPT) was posted on BreachForums in February 2025. The barrier to entry is a Telegram account and a credit card.

"In 2025, AI gained a foothold in cybercrime. In 2026, it will dominate," stated Malwarebytes' ThreatDown 2026 State of Malware report.

How to Spot AI-Generated Malware

The irony of AI-generated malware is that it is often easier to analyze than human-written malware. Traditional malware authors deliberately obscure their code. AI does the opposite (see the triage sketch after this list):

  • Excessive inline comments explaining every line of code
  • Native-language function names and variables
  • README files with attack execution instructions bundled with the malware
  • Hardcoded decryption keys, server URLs, and C2 addresses left in code
  • "Educational/Research Purpose Only" disclaimers - residue from jailbreak prompts
  • Typos like "readme.txtt" instead of "readme.txt" - "a mistake that a threat actor would never make," as Unit 42's Middagh noted
  • Accidental inclusion of decryption tools within ransomware packages (the Susvsex VS Code extension shipped both its ransomware and two separate decryptors)
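Several of these signs are mechanically checkable. A crude static scorer follows - thresholds, string lists, and the score cutoff are assumptions chosen purely for illustration, not field-tested values:

```python
# Crude static scorer for the AI-authorship telltales listed above.
# All thresholds and string lists are illustrative assumptions.
import re

DISCLAIMERS = ["educational purpose", "research purpose", "for educational use only"]
TYPO_ARTIFACTS = [".txtt"]  # e.g. "readme.txtt"

def ai_authorship_score(source: str) -> int:
    score = 0
    lines = source.splitlines() or [""]
    # 1. Excessive inline comments: more than half of all lines are comments
    #    ("'" covers VBScript droppers like PROMPTFLUX).
    comment_lines = sum(1 for l in lines if l.lstrip().startswith(("#", "//", "'")))
    if comment_lines / len(lines) > 0.5:
        score += 2
    lowered = source.lower()
    # 2. Jailbreak residue: "educational/research purposes only" disclaimers.
    if any(d in lowered for d in DISCLAIMERS):
        score += 2
    # 3. Typos a human operator would not ship.
    if any(t in lowered for t in TYPO_ARTIFACTS):
        score += 1
    # 4. Hardcoded values left in code: raw-IP URLs and long hex literals
    #    that look like embedded key material or C2 addresses.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", source):
        score += 1
    if re.search(r"[\"'][0-9a-fA-F]{32,}[\"']", source):
        score += 1
    return score  # e.g. treat >= 3 as "likely AI-generated" during triage
```

Treat the score as a triage hint to prioritize samples for a human analyst, not as a classifier.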

The Scale

The numbers paint a clear picture of acceleration:

  • 87% of global organizations experienced AI-enabled cyberattacks in 2025
  • 82.6% of phishing emails now use AI in some form
  • 53% year-over-year increase in extorted ransomware victims (Check Point 2026)
  • 4x faster attacks - fastest intrusions now reach data exfiltration in 72 minutes (Unit 42)
  • $5.72 million average cost of an AI-powered breach (13% increase)
  • 36% of malicious webpages now use runtime LLM assembly to generate attack payloads dynamically

The React2Shell exploit demonstrated the endpoint of this trend: a single LLM prompting session generated a functioning exploit framework that compromised approximately 91 hosts. The profit was trivial (0.015 XMR, roughly $5). The precedent is not.

What the Numbers Do Not Tell You

Law enforcement has made arrests in AI-adjacent cybercrime - Operation Red Card 2.0 resulted in 651 suspects detained across 16 African countries - but no specific arrests for AI-generated malware creation have been reported. The legal framework has not caught up. Few police academies train cadets on identifying AI-generated threats. Miami Dade College announced it will be one of the first U.S. police academies to do so.

The defensive asymmetry is real. Organizations using security AI and automation experience $1.8 million lower average breach costs. But only about half the organizations Unit 42 works with have any limits on AI usage at all.


The pattern emerging from these samples is bifurcated. Sophisticated actors like the VoidLink developer use AI to produce advanced malware at unprecedented speed. Unskilled actors like the Sicarii developers produce malware with catastrophic bugs they cannot fix. Both categories are growing. The security implications of AI-generated code extend beyond legitimate development - the same tools that let engineers ship code from their commute let threat actors ship malware from theirs.
