Vercel Breach Traced to AI Office Suite OAuth Token Theft

Vercel confirms an April 19 intrusion that pivoted from compromised OAuth tokens at AI office-suite startup Context.ai into a Vercel employee's Google Workspace, then into internal systems holding non-sensitive environment variables for a limited set of customer projects.


TL;DR

  • Vercel confirmed on April 19 that an attacker pivoted from Context.ai - a Thiel-Fellow-founded "AI Office Suite" startup - into a Vercel employee's Google Workspace, then into internal Vercel systems.
  • The attacker enumerated non-sensitive environment variables (which Vercel does not encrypt at rest) across some projects. Variables flagged Sensitive stayed encrypted with no evidence of access.
  • Open-source projects including Next.js and Turbopack are unaffected, per Vercel.
  • Context.ai says the OAuth tokens were stolen from its consumer app during a June 2025 AWS breach that the company only shut down "last month." CrowdStrike is handling forensics.
  • CEO Guillermo Rauch called the attacker "considerably accelerated by AI" in a personal post - that phrasing is not in Vercel's bulletin, and no IOC has been published to back it up.
  • A BreachForums listing claims to sell source code, NPM tokens, and 580 employee records; Vercel has not confirmed any of it.

Vercel has now joined the growing list of developer-infrastructure companies breached through an AI platform. The pivot point this time was Context.ai - a 2024-vintage "AI-native Office Suite" startup backed by Lux Capital, General Catalyst, and Qualcomm Ventures - whose consumer OAuth grants into Google Workspace became the wedge into Vercel's internal systems.

The company's security bulletin, first posted at 11:04 AM PT on April 19 and updated twice since, describes the chain in plain language. An attacker compromised a Vercel employee's account on Context.ai, escalated through that employee's Vercel-linked Google Workspace account, and reached "certain internal Vercel systems." From there the attacker enumerated environment variables that were not marked sensitive.

The attack chain, step by step

The disclosure window starts earlier than most readers realised. Context.ai's own security update places the root cause in June 2025, when its consumer-facing AWS environment was compromised. OAuth tokens issued to consumer users were stolen "prior to the AWS environment being shut down" - which Context.ai says only happened "last month," implying March 2026. CrowdStrike was brought in for containment.

One of those consumer OAuth tokens belonged to a Vercel employee who had signed up to Context.ai with their corporate Google account. Per secondary reporting, the Context.ai app was granted broad Google Workspace scopes ("Allow All" was the wording in one account). That grant was the bridge.

The timeline is tight:

When                       What
June 2025                  Context.ai's consumer AWS environment compromised
~March 2026                Context.ai shuts down the affected environment; OAuth tokens already stolen
April 17-19, 2026          Stolen Workspace token replayed to impersonate a Vercel employee
April 19, 11:04 AM PT      Vercel publishes initial bulletin
April 19, 6:01 PM PT       Vercel updates bulletin with customer guidance
April 19, evening          Rauch posts X thread describing attacker velocity
April 19-20                BreachForums sale listing appears; ransom demand reported

Vercel's bulletin publishes one indicator of compromise: the malicious Google OAuth application ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. Workspace administrators can search their audit logs for that ID to detect whether the attacker touched any other organisation through the same campaign.
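For teams that export that audit log (for example via the Admin SDK Reports API), the IOC sweep can be sketched in a few lines of Python. The export structure below is an assumption modelled on Reports-API activity records - adapt the field names to whatever your actual export produces:

```python
# IOC from Vercel's bulletin: the malicious OAuth client ID.
MALICIOUS_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def find_ioc_hits(activities):
    """Return (actor email, event name) pairs for any audit event
    whose parameters reference the malicious client ID."""
    hits = []
    for activity in activities:
        actor = activity.get("actor", {}).get("email", "unknown")
        for event in activity.get("events", []):
            for param in event.get("parameters", []):
                if param.get("value") == MALICIOUS_CLIENT_ID:
                    hits.append((actor, event.get("name")))
    return hits

# Example against a Reports-API-shaped record (structure assumed):
sample = [{
    "actor": {"email": "dev@example.com"},
    "events": [{"name": "authorize", "parameters": [
        {"name": "client_id", "value": MALICIOUS_CLIENT_ID},
    ]}],
}]
print(find_ioc_hits(sample))  # [('dev@example.com', 'authorize')]
```

Any hit means the same campaign touched your organisation and warrants a full incident-response workup, not just a token revocation.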

What was actually accessed

Vercel is blunt about the adversary, if less so about the blast radius:

We assess the attacker as highly sophisticated based on their operational velocity and detailed understanding of Vercel's systems.

On the data itself, the bulletin distinguishes two tiers. Environment variables marked Sensitive are encrypted at rest and show no evidence of access. Variables that were not marked sensitive - the default for most Vercel projects - are stored in a form the attacker could enumerate; which API endpoint, admin tool, or dashboard feature enabled that enumeration remains undisclosed.

Vercel says a "limited subset of customers whose Vercel credentials were compromised" have been contacted directly. A hard number is not in the bulletin. Vercel also confirms that Next.js, Turbopack, and other open-source projects are unaffected - a deliberate early statement given how much of the JavaScript supply chain runs through Vercel-published packages.

The bulletin leaves several questions open. Whether customer deployments were tampered with, whether CI/CD or code-signing systems were reached, and exactly what data was exfiltrated are all marked "still under investigation."

The "AI-accelerated attacker" line

Several outlets are running headlines that describe the intruder as AI-augmented. That language does not appear in Vercel's bulletin. It comes from a personal post by CEO Guillermo Rauch, who wrote that the attacking group was "considerably accelerated by AI" based on "surprising velocity" and "in-depth understanding of Vercel." No IOC, no captured tooling, no LLM-generated script has been published by Vercel, CrowdStrike, or any third party to substantiate the claim.

That matters. AI-accelerated offensive operations are a real research area - we covered reasoning models jailbreaking other models with a 97% success rate and AI-powered campaigns hitting FortiGate infrastructure. But a CEO's gut feel about attacker tempo is not forensic evidence, and conflating the two risks turning "AI-accelerated" into the new "state-sponsored": a phrase used because it sounds right, not because the evidence supports it.

What customers should do right now

Vercel's bulletin and dashboard guidance converge on five actions:

  1. Audit your environment variables. Run vercel env ls or open Dashboard → Project → Settings → Environment Variables. Identify every variable not already marked as Sensitive.
  2. Rotate non-sensitive secrets aggressively. Database connection strings (Postgres, Mongo, Supabase, Redis), third-party API keys (Stripe, OpenAI, Anthropic, SendGrid), JWT signing secrets, webhook signing secrets, feature-flag keys, analytics tokens. Assume they were read.
  3. Flip everything you can to Sensitive. Vercel has rolled out a new env-var overview page and an improved UI for creating and managing sensitive variables specifically in response to this incident.
  4. Review account and deployment activity. Look for unexpected deployments or access in audit logs. If uncertain about a deployment, remove it.
  5. Harden Deployment Protection. Set protection to Standard at minimum and rotate Deployment Protection tokens.
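As a starting point for steps 1-3, the triage can be scripted. A minimal Python sketch - the (name, is_sensitive) pairs are assumed to come from your own parsing of the dashboard or `vercel env ls` output, and the name heuristic is deliberately rough:

```python
import re

# Name patterns that usually indicate a secret-bearing variable
# (heuristic, not exhaustive - extend for your own conventions).
SECRET_HINT = re.compile(
    r"(KEY|SECRET|TOKEN|PASSWORD|DSN|DATABASE_URL|CONNECTION)", re.I
)

def rotation_candidates(env_vars):
    """Given (name, is_sensitive) pairs, return names that are NOT
    marked Sensitive but look like they hold credentials."""
    return [
        name for name, is_sensitive in env_vars
        if not is_sensitive and SECRET_HINT.search(name)
    ]

envs = [
    ("STRIPE_SECRET_KEY", False),      # plaintext at rest: rotate
    ("DATABASE_URL", False),           # plaintext at rest: rotate
    ("JWT_SIGNING_SECRET", True),      # already Sensitive: encrypted
    ("NEXT_PUBLIC_SITE_NAME", False),  # genuinely non-secret
]
print(rotation_candidates(envs))  # ['STRIPE_SECRET_KEY', 'DATABASE_URL']
```

Anything the script flags should be rotated at the upstream provider first, then re-added to Vercel marked Sensitive.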

If your organisation uses Context.ai, treat that as a separate incident response. Revoke the OAuth grant in Google Workspace admin, review Workspace audit logs for the IOC app ID above, and assume any data Context.ai's consumer app could see is compromised.

The unverified claims

BleepingComputer's reporting surfaces a BreachForums listing from a threat actor using the "ShinyHunters" handle, offering "access keys, source code, and database data," some NPM and GitHub tokens, plus 580 employee records. Known ShinyHunters members have denied involvement, which suggests impersonation. Vercel has not confirmed the sale claim and explicitly states that its open-source projects are safe.

A ransom demand of approximately $2M was reported by crypto-adjacent outlets - coverage that reflects how many crypto frontends run on Vercel. The company has not discussed its payment posture publicly.

The emerging pattern

Strip the branding and this incident looks a lot like LiteLLM's March compromise, Ox Security's MCP STDIO RCE research, and the LiteLLM/Trivy forensics we covered earlier this month. The common shape: AI developer-tool platforms (LLM gateways, agent office suites, MCP servers) are becoming pivot points into downstream developer environments. Context.ai's distinctive feature is that the wedge was OAuth-based rather than package-based - the attacker didn't need to push a poisoned release, they just needed the refresh token one Vercel employee granted in mid-2025.

The mitigation story follows the same arc each time. Treat any AI SaaS tool your engineers sign into with SSO as a privileged third party. Restrict OAuth scopes, use Google Workspace's app-access controls to require admin approval for new AI apps, and audit granted scopes quarterly. The friction is real. The alternative is watching your environment variables leak from someone else's AWS breach nine months after the fact.
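That quarterly audit can be as simple as diffing granted scopes against a watchlist of broad ones. A sketch, assuming you have assembled app-to-scope mappings from the Workspace token audit log (the app names and watchlist below are illustrative, not exhaustive):

```python
# Scopes broad enough to warrant admin review (illustrative subset
# of real Google OAuth scopes).
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_broad_grants(grants):
    """grants: {app_name: set of granted scopes}. Returns only the
    apps holding at least one broad scope, with the offending scopes."""
    return {
        app: scopes & BROAD_SCOPES
        for app, scopes in grants.items()
        if scopes & BROAD_SCOPES
    }

grants = {
    "context-ai-app": {
        "https://www.googleapis.com/auth/drive",
        "https://mail.google.com/",
    },
    "calendar-widget": {
        "https://www.googleapis.com/auth/calendar.readonly",
    },
}
print(flag_broad_grants(grants))
```

An AI office suite holding full Drive and Gmail scopes is exactly the grant profile that turned one employee's signup into this breach.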


About the author

Sophie, AI Infrastructure & Open Source Reporter, is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.