OpenAI Rebrands Aardvark as Codex Security, Adds Malware Analysis

OpenAI's agentic security researcher Aardvark is now Codex Security, with a new malware analysis pipeline that lets users upload samples, run automated analysis, and pull structured reports.

OpenAI's agentic security researcher is no longer just a code scanner. Aardvark, the GPT-5-powered vulnerability hunter that launched in private beta last October, has been rebranded as Codex Security and expanded with a dedicated malware analysis pipeline - a move that puts OpenAI in direct competition with established threat intelligence platforms.

The rebrand, spotted by tech journalist Tibor Blaho in the ChatGPT interface, shows the product now lives at chatgpt.com/aardvark under the new "Codex Security" header with a "Formerly Aardvark" badge. More significantly, a new Malware tab sits alongside the existing Findings, Scans, and Admin sections, signaling that OpenAI is no longer content to just scan source code.

What Changed

| Feature | Aardvark (Oct 2025) | Codex Security (Feb 2026) |
| --- | --- | --- |
| Branding | Aardvark | Codex Security ("Formerly Aardvark") |
| Code vulnerability scanning | Yes (92% detection rate) | Yes |
| Commit-level threat modeling | Yes | Yes |
| Sandbox validation | Yes | Yes |
| Codex-powered patching | Yes | Yes |
| Malware sample analysis | No | Yes (.zip uploads, up to 200MB) |
| Structured reports & artifacts | No | Yes (verdict, SHA256, files, artifacts) |
| Backend | Undisclosed | "Sediment" (staging/analysis engine) |

The Malware Analysis Pipeline

The new Malware section introduces a two-step workflow. Users upload a .zip bundle containing a malware sample (up to 200MB), which gets staged in an internal system OpenAI calls Sediment. From there, they kick off analysis and track jobs through a dashboard that reports status, verdict, SHA256 hash, runtime, extracted files, structured reports, and downloadable artifact bundles.
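OpenAI has not published an API or schema for any of this, but the fields the dashboard exposes map naturally onto a simple job record. The sketch below is purely illustrative - every name in it (MalwareJob, submit_sample, "Sediment" handling, and so on) is an assumption, not a documented interface:

```python
# Hypothetical sketch of the two-step workflow described above: stage a .zip
# sample, then track the resulting analysis job. Every name here (MalwareJob,
# JobStatus, submit_sample) is an illustrative assumption, not OpenAI's API.
import hashlib
import os
from dataclasses import dataclass, field
from enum import Enum


class JobStatus(Enum):
    ACTIVE = "Active"
    SUCCEEDED = "Succeeded"
    FAILED = "Failed"


@dataclass
class MalwareJob:
    """One analysis job, with the fields the dashboard reports."""
    filename: str
    sha256: str
    status: JobStatus
    verdict: str | None = None                 # e.g. "malicious" / "benign"
    runtime_seconds: float | None = None
    extracted_files: list[str] = field(default_factory=list)
    report: dict | None = None                 # structured report
    artifact_bundle: str | None = None         # path/URL to downloadable artifacts


def submit_sample(zip_path: str, max_bytes: int = 200 * 1024 * 1024) -> MalwareJob:
    """Step 1: validate and stage the upload; step 2: return a pending job."""
    size = os.path.getsize(zip_path)
    if size > max_bytes:
        raise ValueError(f"sample is {size} bytes, over the 200 MB limit")

    with open(zip_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    # In the real product the bundle is staged in "Sediment" and analysis is
    # kicked off from the UI; here we just hand back a pending job record.
    return MalwareJob(filename=os.path.basename(zip_path),
                      sha256=digest, status=JobStatus.ACTIVE)
```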

The job dashboard includes filtering by filename or hash, status categories (Active, Succeeded, Failed), and average runtime tracking. This is not a proof-of-concept feature bolted onto a chatbot - it is a purpose-built analysis interface that looks like it belongs in a SOC.
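For a sense of how little machinery those dashboard behaviors require, here is a minimal, self-contained sketch of filename/hash filtering and average-runtime tracking over hypothetical job records. The field names simply mirror the columns the UI shows; the data and code are made up for illustration:

```python
# Sketch of the dashboard behaviors described above: filter by a
# filename-or-hash substring, filter by status category, track average runtime.
from statistics import mean

jobs = [
    {"filename": "loader.zip",  "sha256": "9f2c41...", "status": "Succeeded", "runtime_s": 312.0},
    {"filename": "dropper.zip", "sha256": "41ab07...", "status": "Active",    "runtime_s": None},
    {"filename": "stealer.zip", "sha256": "c7d093...", "status": "Failed",    "runtime_s": 88.5},
]


def filter_jobs(jobs, query=None, status=None):
    """Match a substring against filename or hash, optionally narrow by status."""
    out = jobs
    if query:
        q = query.lower()
        out = [j for j in out if q in j["filename"].lower() or q in j["sha256"].lower()]
    if status:
        out = [j for j in out if j["status"] == status]
    return out


succeeded = filter_jobs(jobs, status="Succeeded")
runtimes = [j["runtime_s"] for j in succeeded if j["runtime_s"] is not None]
print(f"{len(succeeded)} succeeded, average runtime {mean(runtimes):.0f}s")
```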

From Code Scanner to Security Platform

When Aardvark launched in October 2025, it had a clear and narrow scope: scan repositories for vulnerabilities using GPT-5's reasoning capabilities, validate findings in a sandbox, and generate patches via Codex. It achieved a 92% detection rate on benchmark repositories and found 10 CVEs in open-source projects, including a memory corruption flaw in OpenSSH.
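OpenAI has described those stages publicly but not how they are wired together. The stub pipeline below is only a schematic of the scan-validate-patch loop, with every function body invented for illustration:

```python
# Schematic of the scan -> sandbox-validate -> patch loop OpenAI has described.
# All function bodies are invented stubs; the real pipeline internals are not public.
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    description: str
    confirmed: bool = False
    patch: str | None = None


def scan_repository(repo_path: str) -> list[Finding]:
    """Stage 1: model-driven review of source and recent commits (stub)."""
    return [Finding(file="auth.py", description="possible path traversal")]


def validate_in_sandbox(finding: Finding) -> Finding:
    """Stage 2: try to reproduce the issue in an isolated environment (stub)."""
    finding.confirmed = True  # pretend the exploit reproduced
    return finding


def propose_patch(finding: Finding) -> Finding:
    """Stage 3: ask a Codex-style model for a candidate fix (stub)."""
    finding.patch = "normalize and validate the user-supplied path"
    return finding


for finding in scan_repository("./my-repo"):
    finding = validate_in_sandbox(finding)
    if finding.confirmed:
        finding = propose_patch(finding)
        print(f"{finding.file}: {finding.description} -> {finding.patch}")
```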

The malware analysis expansion is a fundamentally different capability. Code scanning reads source files and reasons about logic. Malware analysis deals with compiled binaries, obfuscated payloads, and adversarial artifacts designed to resist examination. These are different disciplines, and the fact that OpenAI is shipping a unified interface for both suggests they see Codex Security as a platform play, not just a developer tool.

What the Screenshot Does Not Tell You

Several things remain unclear.

What Powers the Analysis?

The screenshot shows the Malware tab at chatgpt.com/aardvark/malware, but OpenAI has not published documentation for this feature. We do not know whether the analysis runs on GPT-5.3-Codex (the first model OpenAI classified as "High capability" for cybersecurity), a specialized model, or a hybrid system that combines LLM reasoning with traditional static and dynamic analysis tools.
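To make the third option concrete: a hybrid pipeline could run cheap, deterministic checks first and hand the structured output to a model for natural-language triage. The sketch below is pure speculation about what that wiring could look like, not a description of how Codex Security works:

```python
# Speculative sketch of a "hybrid" pipeline: deterministic static checks first,
# then an LLM-style triage step over the structured results. Nothing here
# reflects Codex Security's actual implementation; the triage step is a stub.
import hashlib
import zipfile


def static_pass(sample_zip: str) -> dict:
    """Cheap deterministic signals: hash, file listing, suspicious extensions."""
    with open(sample_zip, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    with zipfile.ZipFile(sample_zip) as z:
        names = z.namelist()
    suspicious = [n for n in names if n.lower().endswith((".exe", ".dll", ".ps1", ".vbs"))]
    return {"sha256": sha256, "files": names, "suspicious": suspicious}


def llm_triage(static_report: dict) -> str:
    """Hand the structured signals to a model for natural-language triage.
    Stubbed here; a real system would call a model API with the report."""
    if static_report["suspicious"]:
        return f"Likely dropper: {len(static_report['suspicious'])} executable payload(s) found."
    return "No obvious executable payloads; dynamic analysis recommended."


def analyze(sample_zip: str) -> dict:
    report = static_pass(sample_zip)
    report["triage"] = llm_triage(report)
    return report
```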

Who Gets Access?

Aardvark has been in private beta since launch, and the screenshot shows a PRO-tier account. OpenAI's Trusted Access for Cyber program already provides vetted security researchers with expanded model access for malware analysis and red-teaming. Whether Codex Security's malware feature is limited to this program, available to all Pro subscribers, or requires separate enrollment is unknown.

What Is Sediment?

The upload interface references "Sediment" as the staging destination for malware samples. This appears to be a new internal system - no prior OpenAI documentation mentions it. Whether Sediment is an isolated sandbox environment, a purpose-built orchestration layer, or something else entirely has not been disclosed.

The CISO-Shaped Hole

Matt Knight, OpenAI's CISO and co-architect of Aardvark, departed the company in January 2026. His parting words were telling: "With Aardvark launched, this feels like the right moment to move on." The rebrand and expansion to malware analysis happened after his exit. OpenAI has not named a replacement CISO, though former Palantir CISO Dane Stuckey (who joined OpenAI in October 2024) is a likely candidate.

The Competitive Landscape

The malware analysis market has been dominated by platforms like VirusTotal (Google), CrowdStrike Falcon Sandbox, and Joe Sandbox for years. These tools use signature databases, heuristic engines, and behavioral analysis in controlled environments. What they do not have is LLM-powered reasoning that can interpret obfuscated code, explain what a sample does in natural language, and connect findings to broader threat intelligence.

If Codex Security's malware analysis delivers on the same reasoning quality that Aardvark demonstrated for code scanning - contextual understanding, not just pattern matching - it could be a meaningful differentiator. The AI-powered security agent market is growing fast, and OpenAI is positioning itself not as a vendor selling to security companies but as the security company itself.

The timing also matters. Supply chain attacks have been increasing in frequency and sophistication. The ability to upload a suspicious package, get an automated analysis with extracted artifacts and a structured verdict, and do it all from the same platform that scans your repositories - that is a compelling pitch for any security team already using OpenAI's tools.


The rebrand from Aardvark to Codex Security is more than a name change. It is a statement of intent. OpenAI is building a security platform that covers the full lifecycle from code review to malware triage, unified under the Codex brand. Whether the malware analysis capabilities match the promise of the interface remains to be seen - but the interface itself tells you where OpenAI thinks this is going.
