OpenAI's Five-Word GPT-5.4 Tease - What It Means

OpenAI posted '5.4 sooner than you Think.' one hour after launching GPT-5.3 Instant. Here is what the Codex repo leaks already told us - and what the tweet does not.


On March 3, 2026, exactly one hour after shipping GPT-5.3 Instant, OpenAI's official X account posted five words: "5.4 sooner than you Think." No image. No thread. No link. The post hit 3 million views and 25,000 likes before most people had finished reading the 5.3 Instant changelog.

Claim / Our Take

  • Claimed: GPT-5.4 is imminent and will be a major Thinking-class release, possibly with 2M context and stateful AI
  • Verified: Two Codex repo PRs confirm full-resolution vision support and a Fast Mode priority tier are in development for GPT-5.4
  • Unverified: The 2M context window figure comes from a single influencer post with no code evidence; the "Think" capitalization theory is community speculation

What They Showed

The tweet itself says nothing technical. OpenAI released it one hour after GPT-5.3 Instant shipped - a timing choice that reads less like a scheduled announcement and more like a deliberate forward-tease to keep momentum going.

The community immediately latched onto the capital T in "Think." The dominant interpretation, circulated by AI tracking account @AILeaksAndNews, is that GPT-5.4 will be a Thinking-class model - that the capitalization is a wink at OpenAI's internal Thinking mode branding. That reading is plausible. It's also entirely unconfirmed.

The more concrete signal isn't the tweet. It's what had already leaked from OpenAI's own Codex repository in the six days before it.

What the Code Actually Says

GPT-5.4 details surfaced four separate times in the days before the tweet landed: twice in OpenAI's public Codex GitHub repository, and twice through accidental disclosures.

PR #13050: Full-Resolution Vision

On February 27, an OpenAI engineer submitted PR #13050 introducing a flag called view_image_original_resolution, gated behind a minimum version requirement of (5, 4). The feature bypasses OpenAI's standard image compression and passes detail: "original" directly to the Responses API - sending raw PNG, JPEG, and WebP bytes instead of downsampled versions. After community screenshots circulated, the commit was force-pushed seven times in five hours, with the version check edited down to gpt-5.3-codex or newer. The original diff is preserved in screenshots.

This is the most technically solid evidence: a specific, reviewable capability tied explicitly to GPT-5.4 in source code.
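To make the mechanism concrete, here is a minimal sketch of a tuple-based version gate like the one the leaked diff describes. Only the flag name view_image_original_resolution, the (5, 4) floor, and the detail: "original" value come from the PR; the build_image_payload helper and compress() stub are illustrative, not OpenAI's actual code.

```python
# Hypothetical sketch of the version gate described in PR #13050.
# The flag name, version floor, and detail value come from the leak;
# everything else here is illustrative.

MIN_ORIGINAL_RES_VERSION = (5, 4)  # version floor from the original diff


def compress(data: bytes) -> bytes:
    """Placeholder for the standard client-side downsampling path."""
    return data


def build_image_payload(model_version: tuple[int, int], image_bytes: bytes) -> dict:
    """Attach an image, requesting full resolution when the model supports it."""
    if model_version >= MIN_ORIGINAL_RES_VERSION:
        # Skip compression entirely; send the raw PNG/JPEG/WebP bytes through.
        return {"image": image_bytes, "detail": "original"}
    # Older models get the standard compressed path.
    return {"image": compress(image_bytes), "detail": "auto"}
```

Tuple comparison gives the right semantics for free: (5, 3) fails the check, while (5, 4) and anything later passes.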

PR #13212: Fast Mode Priority Tier

On March 2, engineer pash-openai opened PR #13212 adding a Fast Mode toggle to the Codex terminal interface. The code introduces a ServiceTier enum with Standard and Fast variants, a /fast slash command, and service_tier=priority on API requests when Fast mode is active. The PR description referenced "toggle Fast mode for GPT-5.4" by name; like the vision PR, it was scrubbed within hours.

This points to a paid or tier-gated speed option - consistent with OpenAI's existing Plus/Pro tiering strategy.
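Assuming the structure the PR describes, the wiring could look roughly like this. The enum variants and the service_tier=priority value are from the leaked diff (the actual PR is in Rust); the request_params helper and Python rendering are hypothetical.

```python
# Rough sketch of the Fast Mode plumbing from PR #13212 (hypothetical;
# the real Codex CLI is Rust, and only the enum variants and the
# service_tier=priority value appear in the leak).
from enum import Enum


class ServiceTier(Enum):
    STANDARD = "standard"
    FAST = "fast"  # toggled via the /fast slash command


def request_params(tier: ServiceTier, model: str = "gpt-5.4") -> dict:
    """Build API request params; Fast mode adds service_tier=priority."""
    params = {"model": model}
    if tier is ServiceTier.FAST:
        params["service_tier"] = "priority"
    return params
```

The design matches how priority tiers are typically gated: the default path sends no tier field at all, and only the opt-in toggle attaches the paid parameter.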

Two Accidental Disclosures

Beyond the PRs, OpenAI Codex team member Tibo accidentally posted a screenshot showing GPT-5.4 as a selectable model in the Codex app alongside GPT-5.3-Codex. The post was deleted quickly. Separately, developer Corey Noles received a cybersecurity capability block in Codex that referenced the internal string gpt-5.4-ab-arm1-1020-1p-codexswic-ev3 - an identifier matching OpenAI's naming convention for Codex-line variants.

We covered the first wave of these leaks in detail when they emerged.

What Is Verified vs Speculation

Signal | Source | Status
Full-resolution vision (detail: "original") | PR #13050 source code | Confirmed
Fast Mode / priority service tier | PR #13212 source code | Confirmed
GPT-5.4 model string in Codex error | Developer screenshot | Confirmed
GPT-5.4 in Codex model selector | Employee screenshot (deleted) | Confirmed
2M token context window | Single influencer post (Alex Finn) | Unverified
Stateful AI / persistent sessions | One outlet, no code evidence | Unverified
"Think" capitalization = Thinking model | Community inference | Speculation

The Gap

The tweet confirms GPT-5.4 exists and is close. It tells you nothing about what "sooner" means on OpenAI's calendar, which variant ships first, or which capabilities will actually land in the initial release.

As of this writing, GPT-5.3 Thinking and GPT-5.3 Pro haven't shipped. OpenAI announcing 5.4 while the 5.3 rollout is incomplete suggests two possibilities: GPT-5.4 is another Codex-specific variant (following the 5.2-Codex and 5.3-Codex pattern), or the 5.3 Thinking upgrade and 5.4 are shipping as a combined moment - a way to clear the backlog in one announcement.

The release cadence since GPT-5 launched in August 2025 has been roughly five to six weeks between major variants. GPT-5.3 Instant shipped March 3. If the pattern holds, the next release lands between early and mid-April.
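The cadence math is easy to check with date arithmetic. The five-to-six-week gap is this article's estimate, not an OpenAI commitment.

```python
# Project the next release window from the GPT-5.3 Instant ship date,
# assuming the article's observed five-to-six-week cadence.
from datetime import date, timedelta

instant_ship = date(2026, 3, 3)  # GPT-5.3 Instant release

earliest = instant_ship + timedelta(weeks=5)  # five-week gap
latest = instant_ship + timedelta(weeks=6)    # six-week gap

print(earliest, latest)  # 2026-04-07 2026-04-14: early to mid-April
```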

The 2M context figure - the most technically dramatic claim - appears in one influencer post and hasn't been corroborated by any code evidence across the four Codex leak events. That doesn't rule it out, but it does mean the figure shouldn't be treated as confirmed until OpenAI says so.


What the leaked code has actually told developers is this: full-resolution vision and a priority speed tier are real and coded for GPT-5.4. The rest is reconstruction from five words and a capital letter.

Sources: OpenAI on X - PiunikaWeb - NxCode deep-dive - Geeky Gadgets

About the author

AI Infrastructure & Open Source Reporter

Sophie is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.