OpenAI Launches GPT-5.4-Cyber for Vetted Defenders Only
OpenAI's GPT-5.4-Cyber is a restricted model fine-tuned for defensive cybersecurity, pairing binary reverse engineering with reduced refusal rates on legitimate security tasks. It is available only through identity-verified access tiers - a direct response to Anthropic's Mythos Preview.

TL;DR
- OpenAI launched GPT-5.4-Cyber, a GPT-5.4 variant fine-tuned for defensive cybersecurity with reduced refusal rates on legitimate security tasks
- Key capability: binary reverse engineering - analyzing compiled software for malware and vulnerabilities without source code
- Access requires identity verification through OpenAI's Trusted Access for Cyber (TAC) program, launched alongside a $10M cybersecurity grant in February
- OpenAI's Codex Security has already contributed to fixes for 3,000+ critical and high-severity vulnerabilities across the open-source ecosystem
- The launch came exactly one week after Anthropic's Project Glasswing, with Bloomberg framing it as a direct race against Claude Mythos

OpenAI shipped GPT-5.4-Cyber on April 14 - a fine-tuned variant of GPT-5.4 built specifically for defensive cybersecurity. It's not available to the public. You need to prove you're a defender before you can use it.
The timing - exactly seven days after Anthropic launched Project Glasswing and Claude Mythos Preview - makes the competitive positioning impossible to ignore.
What GPT-5.4-Cyber does
The model is described as "cyber-permissive": purpose-built to lower refusal boundaries on legitimate security tasks while maintaining safeguards against misuse. Where standard GPT-5.4 declines to generate exploit analysis or dissect malware samples, Cyber is trained to engage.
The headline capability is binary reverse engineering - analyzing compiled software for malware, vulnerabilities, and security weaknesses without requiring access to source code. For security teams that spend their days staring at disassembled binaries in Ghidra or IDA Pro, a model that can reason about compiled code at speed is a genuine workflow change.
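To ground what "reasoning about compiled code" starts from: the first triage pass over an unknown binary is often as simple as pulling printable strings out of the raw bytes, the way the classic `strings` utility does, to surface embedded commands or URLs before any disassembly. A minimal sketch (the sample bytes and function name are illustrative, not from OpenAI's tooling):

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Pull printable ASCII runs out of a compiled binary, like `strings`."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Toy "binary": a few machine-code bytes with embedded artifacts a
# triage pass would flag (a shell command and a callback URL)
blob = b"\x55\x48\x89\xe5cmd.exe /c whoami\x00\x31\xc0http://evil.example\x00\xc3"
print(extract_strings(blob))
# ['cmd.exe /c whoami', 'http://evil.example']
```

What the model-assisted workflow adds on top of tricks like this is the slow part: interpreting what those artifacts and the surrounding disassembly actually do.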
OpenAI frames the model as the beginning of a series: "fine-tuning our models specifically to enable defensive cybersecurity use cases, starting today with a variant of GPT-5.4 trained to be cyber-permissive." More capable variants, they say, will require stricter deployment controls.
The access model: verify first, then unlock
Access runs through the Trusted Access for Cyber (TAC) program, which OpenAI launched in February 2026 alongside a $10 million cybersecurity grant program. TAC implements tiered identity verification:
- Individual defenders verify identity at chatgpt.com/cyber
- Enterprise teams request access through OpenAI representatives
- Existing TAC members can apply for higher tiers separately
- Only the highest verification tier unlocks GPT-5.4-Cyber
OpenAI says the program has expanded to "thousands of verified security professionals" and "hundreds of security teams." That's a wider net than Anthropic's Glasswing, which restricts Mythos Preview to approximately 52 organizations.
The philosophical shift is notable: instead of building one model and restricting what it can do (the traditional safety approach), OpenAI is building a model that can do more and restricting who can use it. Identity replaces capability as the control layer.
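The identity-as-control-layer idea reduces to a simple gate: each model carries a minimum verification tier, and access is a comparison against the caller's tier rather than a per-request capability filter. A sketch of that model - tier names and the model table are hypothetical, not OpenAI's actual API:

```python
from enum import IntEnum

class Tier(IntEnum):
    # Hypothetical tiers loosely mirroring TAC's "higher tiers unlock more"
    UNVERIFIED = 0
    VERIFIED_INDIVIDUAL = 1
    VERIFIED_TEAM = 2
    HIGHEST = 3

# Hypothetical model -> minimum-tier floor; only the top tier reaches Cyber
REQUIRED = {
    "gpt-5.4": Tier.UNVERIFIED,
    "codex-security": Tier.VERIFIED_INDIVIDUAL,
    "gpt-5.4-cyber": Tier.HIGHEST,
}

def can_access(model: str, tier: Tier) -> bool:
    """Identity, not capability, is the gate: caller tier vs. model floor."""
    return tier >= REQUIRED[model]

print(can_access("gpt-5.4-cyber", Tier.VERIFIED_TEAM))  # False
print(can_access("gpt-5.4-cyber", Tier.HIGHEST))        # True
```

The design consequence is that safety work moves from the model (refusal training) to the perimeter (verification, audit, revocation).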
Codex Security: the track record
GPT-5.4-Cyber doesn't exist in a vacuum. OpenAI's Codex Security product launched in private beta six months ago and as a broader research preview earlier in 2026. The numbers, per SiliconANGLE:
- Contributed to fixes for 3,000+ critical and high-severity vulnerabilities across the ecosystem
- Codex for Open Source provides free security scanning to 1,000+ projects
- CTF (capture-the-flag) performance improved from 27% (GPT-5, August 2025) to 76% (GPT-5.1-Codex-Max, November 2025)
Those CTF numbers still trail Mythos Preview's 83.1% on CyberGym - a different benchmark, so not a direct comparison - but the trajectory is steep. And importantly, OpenAI's security tooling is already deployed at scale, while Anthropic's Glasswing is still in its first weeks.
Mythos vs Cyber: the comparison
| | Claude Mythos Preview | GPT-5.4-Cyber |
|---|---|---|
| CyberGym | 83.1% | Not published |
| SWE-Bench Verified | 93.9% | ~77% (base GPT-5.4) |
| Binary reverse engineering | Yes (confirmed in red team report) | Yes (headline feature) |
| Access model | ~52 orgs via Glasswing | Thousands via TAC tiers |
| Pricing | $25/$125 per M tokens | Not disclosed |
| Open-source program | Claude for Open Source | Codex for Open Source (1,000+ projects) |
| Coalition | 12 partners, $100M credits | $10M grant program |
Mythos appears more capable on paper. Cyber appears more accessible. Anthropic built a coalition; OpenAI built a verification portal. Both arrived at the same conclusion: the models are too dangerous for public release but too useful to keep locked away entirely.
What this means
The AI cybersecurity arms race now has two confirmed fronts. Both labs have restricted models, identity-gated access, and open-source programs. Both are racing to arm defenders before the offensive capabilities they've demonstrated become available through open-weight models.
The unresolved tension: Alex Stamos estimated at the Glasswing launch that open-weight models are roughly six months behind frontier models on vulnerability finding. When that gap closes, the entire premise of restricted access - that controlling distribution controls risk - stops working.
GPT-5.4-Cyber and Mythos Preview are bets that six months is enough time to patch the world's most critical software. The clock is running.